Is it possible to scale data per group in Spark?

I want to scale data with StandardScaler (from pyspark.mllib.feature import StandardScaler). Right now I can do it by passing the values of the RDD to the transform function, but the problem is that I want to preserve the key. Is there any way to scale my data while preserving its key?
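
Roughly, this is what I do at the moment (a minimal sketch, assuming the parsed_data RDD of (label, features) pairs built further down): the scaler is fit and applied on the values only, so the keys are lost.

features = parsed_data.values()                                    # RDD of feature vectors only
scaler = StandardScaler(withMean=True, withStd=True).fit(features)
scaled = scaler.transform(features)                                # scaled vectors, but the keys are gone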

Sample dataset

0,tcp,http,SF,181,5450,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,8,8,0.00,0.00,0.00,0.00,1.00,0.00,0.00,9,9,1.00,0.00,0.11,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,239,486,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,8,8,0.00,0.00,0.00,0.00,1.00,0.00,0.00,19,19,1.00,0.00,0.05,0.00,0.00,0.00,0.00,0.00,normal.
0,tcp,http,SF,235,1337,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,8,8,0.00,0.00,0.00,0.00,1.00,0.00,0.00,29,29,1.00,0.00,0.03,0.00,0.00,0.00,0.00,0.00,smurf.

Imports

import sys
import os
from collections import OrderedDict
from numpy import array
from math import sqrt
try:
    from pyspark import SparkContext, SparkConf
    from pyspark.mllib.clustering import KMeans
    from pyspark.mllib.feature import StandardScaler
    from pyspark.statcounter import StatCounter

    print ("Successfully imported Spark Modules")
except ImportError as e:
    print ("Can not import Spark Modules", e)
    sys.exit(1)

Part of the code

    sc = SparkContext(conf=conf)           # conf and data_file are set up elsewhere
    raw_data = sc.textFile(data_file)      # one CSV record per line
    parsed_data = raw_data.map(Parseline)  # RDD of (label, feature_vector) pairs

The Parseline function:

def Parseline(line):
    line_split = line.split(",")
    # keep the numeric duration column, skip the categorical protocol/service/flag
    # columns, and use the trailing label as the key
    clean_line_split = [line_split[0]] + line_split[4:-1]
    return (line_split[-1], array([float(x) for x in clean_line_split]))
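
For illustration, a minimal (untested) sketch of the kind of result I am after, assuming the (label, features) pairs produced by Parseline above: fit the scaler on the values only, then zip the labels back onto the scaled vectors. keys() and values() are both derived from the same parsed_data RDD, so zip pairs the elements back up in their original order.

labels = parsed_data.keys()                                        # RDD of labels (the keys)
features = parsed_data.values()                                    # RDD of feature vectors
scaler = StandardScaler(withMean=True, withStd=True).fit(features)
scaled_with_key = labels.zip(scaler.transform(features))           # RDD of (label, scaled_vector) pairs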
