I set up the Universal Recommender using the Docker one-liner, with the following config file:
{
  "engineId": "test",
  "engineFactory": "com.actionml.engines.ur.UREngine",
  "sparkConf": {
    "master": "local",
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
    "spark.kryo.registrator": "org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator",
    "spark.kryo.referenceTracking": "false",
    "spark.kryoserializer.buffer": "300m",
    "spark.executor.memory": "20g",
    "spark.driver.memory": "10g",
    "spark.es.index.auto.create": "true",
    "es.index.auto.create": "true",
    "spark.es.nodes": "elasticsearch",
    "spark.es.nodes.wan.only": "true"
  },
  "algorithm": {
    "indicators": [
      {
        "name": "read",
        "maxCorrelatorsPerItem": 5
      }
    ]
  }
}
I then added two 'read' events, one for each of two separate users. After training, a simple query with an empty JSON body returns the names of all the items, but every score is 0.
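For reference, the two events were of roughly this shape (user and item IDs here are illustrative placeholders, not the exact values I sent):

```json
[
  {
    "event": "read",
    "entityType": "user",
    "entityId": "user-1",
    "targetEntityType": "item",
    "targetEntityId": "item-1"
  },
  {
    "event": "read",
    "entityType": "user",
    "entityId": "user-2",
    "targetEntityType": "item",
    "targetEntityId": "item-2"
  }
]
```

The query was simply an empty JSON object, `{}`, POSTed to the engine's queries endpoint.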
I would like to know how to obtain non-zero scores for the results.