sklearn: How to speed up a vectorizer (e.g. TfidfVectorizer)

After thoroughly profiling my program, I was able to identify that the vectorizer is what is slowing it down.

I am working with text data, and two lines of simple unigram tf-idf vectorization are taking up 99.2% of the total time the code takes to execute.
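For reference, this kind of hot spot can be confirmed with the standard-library profiler; a minimal sketch (the `vectorize` function below is a stand-in for the two tf-idf lines, not code from the original program):

```python
import cProfile
import io
import pstats

# Stand-in workload; in the real program this would be the tfidf.fit_transform call.
def vectorize():
    return sum(i * i for i in range(100000))

pr = cProfile.Profile()
pr.enable()
vectorize()
pr.disable()

# Print the functions sorted by cumulative time; the hot spot shows up at the top.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats('cumulative').print_stats()
report = buf.getvalue()
print(report)
```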

Here is a runnable example (this will download a 3 MB training file to your disk; omit the urllib parts to run on your own sample):

#####################################
# Loading Data
#####################################
import urllib.request

import nltk.stem
from sklearn.feature_extraction.text import TfidfVectorizer

url = "https://s3.amazonaws.com/hr-testcases/597/assets/trainingdata.txt"
raw = urllib.request.urlopen(url).read().decode("utf-8")
with open("to_delete.txt", "w") as f:
    f.write(raw)
###
def extract_training():
    X = []
    y = []
    with open("to_delete.txt") as f:
        N = int(f.readline())
        for _ in range(N):
            line = f.readline()
            label, text = int(line[0]), line[2:]
            X.append(text)
            y.append(label)
    return X, y

X_train, y_train = extract_training()
#############################################
# Extending Tfidf to have only stemmed features
#############################################
english_stemmer = nltk.stem.SnowballStemmer('english')

class StemmedTfidfVectorizer(TfidfVectorizer):
    def build_analyzer(self):
        analyzer = super().build_analyzer()
        return lambda doc: (english_stemmer.stem(w) for w in analyzer(doc))

tfidf = StemmedTfidfVectorizer(min_df=1, stop_words='english', analyzer='word', ngram_range=(1, 1))
#############################################
# Line below takes 6-7 seconds on my machine
#############################################
Xv = tfidf.fit_transform(X_train)

I tried converting the X_train list into an np.array, but it made no difference in performance.
