Submitting a custom model to a Spark cluster from Python
In the big-data era, data is stored across clusters, so a common problem when training models on that data is how to run an arbitrary model directly on a Spark cluster. One workable approach, found through investigation, is to wrap the model in a class that Spark's ML pipeline machinery can serialize and execute on the cluster. The concrete steps are as follows:
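The class-wrapping idea that the steps below implement can be illustrated without Spark: the custom stage is just a class exposing a `transform` method that maps an input column to output columns. On the cluster, `pyspark.ml.Model` supplies `transform()` and the `HasInputCol`/`HasOutputCol` mixins supply the column parameters; in this hypothetical sketch, plain Python stands in for both (all names here are illustrative, not part of the pyspark or sparktorch APIs):

```python
# Spark-free sketch of the class-wrapping pattern.
# A list of dicts stands in for a Spark DataFrame; in the real pipeline,
# pyspark.ml.Model and the Params mixins play these roles.

class SplitColSketch:
    """Split a delimited string column into several numeric columns."""

    def __init__(self, input_col, output_cols, sep=","):
        self.input_col = input_col      # analogous to HasInputCol
        self.output_cols = output_cols  # analogous to HasOutputCol
        self.sep = sep

    def transform(self, rows):
        # Map each "row" to a new row with the split columns added.
        out = []
        for row in rows:
            parts = row[self.input_col].split(self.sep)
            new_row = dict(row)
            for col, part in zip(self.output_cols, parts):
                new_row[col] = float(part)
            out.append(new_row)
        return out


stage = SplitColSketch("features", ["x", "y"])
result = stage.transform([{"features": "1.5,2.5"}])
# result[0]["x"] == 1.5 and result[0]["y"] == 2.5
```

Because all of the stage's behavior lives in one class, Spark can ship the class to executors and apply it partition by partition, which is exactly what the `pyspark.ml` version below achieves.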
1. Prepare the development environment: the pytorch and sparktorch packages are required.
2. Example code:
from pyspark.sql import SparkSession
from pyspark.sql.types import DoubleType
from pyspark import keyword_only
from pyspark.ml.param.shared import HasInputCol, HasOutputCol, HasOutputCols, Param, Params
from pyspark.ml import Pipeline, PipelineModel
from sparktorch.pipeline_util import PysparkReaderWriter
from pyspark.ml import Model
from sparktorch import PysparkPipelineWrapper
from pyspark.ml.regression import LinearRegression, LinearRegressionModel
from pyspark.ml.util import Identifiable, MLReadable, MLWritable
from pyspark.ml.param import TypeConverters
# Hive-enabled session so cluster-resident tables can be read directly
spark = SparkSession.builder \
    .enableHiveSupport() \
    .getOrCreate()
df = spark.read.table('hive_table_name')
class SplitCol(Model, HasInputCol, HasOutputCol