How much should I trust swarm results?

swarming
#20

Here’s my main loop:

predField = 'media_wear_pct'
inclFIELDS = ['media_wear_pct', 'device_temp_c']
numRecords = 25000

PARAMS_encoders = PARAMS['modelParams']['sensorParams']['encoders']
model = ModelFactory.create(PARAMS)
model.enableInference({'predictedField': predField})

print('HTM Model Initiated')
print(' --> inclFIELD_SET = {}'.format(inclFIELDS))
print(' --> Pred Field = {}'.format(predField))
print(' ')
results = list()
DF_NuPIC = pd.DataFrame(DF_input[inclFIELDS].head(numRecords))
for index, row in DF_NuPIC.iterrows():
    modelInput = dict()
    for field in inclFIELDS:
        if field in PARAMS_encoders:
            modelInput[field] = row[field]

    resultObj = model.run(modelInput)
    predFieldVal = row[predField]
    bestPredictions = resultObj.inferences["multiStepBestPredictions"]
    allPredictions = resultObj.inferences["multiStepPredictions"]
    AnomScore = resultObj.inferences["anomalyScore"]
    AnomLikl_obj = anomaly_likelihood.AnomalyLikelihood()
    AnomLikl = AnomLikl_obj.anomalyProbability(modelInput[predField], AnomScore, index)
    oneStep = bestPredictions[1]
    try:
        oneStepConfidence = allPredictions[1][oneStep]
    except KeyError:
        oneStepConfidence = 0
    result = {'index': index, 'AScore': AnomScore, 'ALikl': AnomLikl, 'predFieldVal': predFieldVal, 'Pred_1': oneStep, 'Pred_1_conf': oneStepConfidence * 100}
#21

I also wonder if there may be something wrong here, since my Anomaly Likelihoods are still flat-lined at 0.5, even though the data has over 20,000 rows.

#22

Here’s the plot when I did not change the swarm params (using AdaptiveScalarEncoders):

(u'media_wear_pct', ' -- ', None)
('_classifierInput', ' -- ', {'classifierOnly': True, 'name': '_classifierInput', 'clipInput': True, 'n': 371, 'fieldname': 'media_wear_pct', 'w': 21, 'type': 'AdaptiveScalarEncoder'})
(u'device_temp_c', ' -- ', {'name': 'device_temp_c', 'clipInput': True, 'n': 283, 'fieldname': 'device_temp_c', 'w': 21, 'type': 'AdaptiveScalarEncoder'})
(u'power_on_hours', ' -- ', None)
(u'libversion', ' -- ', None)
(u'write_amp', ' -- ', None)

(plot: Pred_v_TARGET)

#23

Those parameters don’t make sense to me for several reasons:

  • there is no predictedField
  • the classifier is expecting input from media_wear_pct, which doesn’t have encoder params
  • boostStrength is 0.0

Can we take a step back and look at what your swarm description file looked like? Either the swarm was not run correctly (bad config?), or there is a bug in the swarming code.

Anyway, why are you swarming? Maybe we can think this problem through and come up with some logical starting parameters and see how well they work.

#24

Sam and I kind of took this off on a tangent, but let me restate the question and give a clear answer in case anyone is unsure at this point:

Q: Should I be wary of any parameters chosen by swarming?

A: Yes!

Swarming is a tool used to search an insanely huge parameter space, and its results should always be scrutinized. Sometimes they won’t make sense, but with a few tweaks they might get you headed in the right direction.

#25

So I’m swarming because I want to have generic anomaly detection and prediction methods in place, so that when I pass in a new DataFrame and ‘TARGET’ field it does the rest automatically (using some basic descriptive stats to set the minval/maxval ranges for the ScalarEncoders).
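That auto-ranging step can stay pretty simple. Here is a minimal sketch of the idea (the helper name and the padding/bucket-count defaults are illustrative assumptions, not anything from NuPIC):

```python
def encoder_params_from_stats(values, field_name, w=21, n_buckets=210, pad_frac=0.05):
    """Build ScalarEncoder params for one field from basic descriptive stats.

    Illustrative helper: w, the bucket count, and the padding fraction are
    arbitrary defaults, not values from the thread.
    """
    lo, hi = float(min(values)), float(max(values))
    pad = (hi - lo) * pad_frac  # widen the range a bit so future data is clipped less often
    return {
        'fieldname': field_name,
        'name': field_name,
        'type': 'ScalarEncoder',
        'minval': lo - pad,
        'maxval': hi + pad,
        'w': w,
        'n': n_buckets + w,  # n = buckets + w for a ScalarEncoder
        'clipInput': True,
    }
```

The returned dict has the same shape as the ScalarEncoder entries shown later in this thread and would slot into `PARAMS['modelParams']['sensorParams']['encoders'][field_name]`.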

I am extremely curious to delve into the swarm and see what’s up! Though I also tried running the model without swarming, setting the params by hand so that the ‘_classifierInput’ field is in the input encoding with the same minval/maxval/n/w values, and I still see the same problem!:

for k,v in OBJ_NupicAnom.MODEL_PARAMS['modelParams']['sensorParams']['encoders'].items():
    print(k,' -- ',v)

(u'media_wear_pct', ' -- ', {'maxval': 2472.0, 'name': u'media_wear_pct', 'minval': 666.0, 'clipInput': True, 'n': 231, 'fieldname': u'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'})
('_classifierInput', ' -- ', {'maxval': 2472.0, 'classifierOnly': True, 'name': u'media_wear_pct', 'minval': 666.0, 'clipInput': True, 'n': 231, 'fieldname': u'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'})

Here’s the resulting plot:
(plot: Pred_v_TARGET -- ['media_wear_pct'])

Here again is my main loop:

_inclFIELDS = ['media_wear_pct']
predField = 'media_wear_pct'
numRecords = 20000
model = ModelFactory.create(PARAMS)
model.enableInference({'predictedField': predField})
print('HTM Model Initiated')
print(' ')
results = list()
DF_NuPIC = pd.DataFrame(DF_input[_inclFIELDS].head(numRecords))
for index, row in DF_NuPIC.iterrows():
    modelInput = dict()
    for field in _inclFIELDS:
        modelInput[field] = row[field]

    resultObj = model.run(modelInput)
    predFieldVal = row[predField]
    TargetVal = predFieldVal
    bestPredictions = resultObj.inferences["multiStepBestPredictions"]
    allPredictions = resultObj.inferences["multiStepPredictions"]
    AnomScore = resultObj.inferences["anomalyScore"]
    AnomLikl_obj = anomaly_likelihood.AnomalyLikelihood()
    AnomLikl = AnomLikl_obj.anomalyProbability(modelInput[predField], AnomScore, index)
    oneStep = bestPredictions[1]
    try:
        oneStepConfidence = allPredictions[1][oneStep]
    except KeyError:
        oneStepConfidence = 0
    result = {'index':index, 'AScore':AnomScore, 'ALikl':AnomLikl, 'predFieldVal':predFieldVal, 'Target':TargetVal, 'Pred_1': oneStep, 'Pred_1_conf':oneStepConfidence*100}
#26

Here are all the pieces of the swarm config as well:

swarmCONFIG = OBJ_params.SWARMconfig
swarmCONFIG['inferenceArgs']
{'predictionSteps': [1], 'predictedField': 'media_wear_pct'}

swarmCONFIG['iterationCount']
-1

swarmCONFIG['swarmSize']
'large'

swarmCONFIG['includedFields']
[{'fieldName': u'device_temp_c', 'fieldType': 'float'}, {'fieldName': u'libversion', 'fieldType': 'float'}, {'fieldName': u'media_wear_pct', 'fieldType': 'float'}, {'fieldName': u'power_on_hours', 'fieldType': 'float'}, {'fieldName': u'write_amp', 'fieldType': 'float'}]

swarmCONFIG['streamDef']
{'info': 'media_wear_pct', 'version': 1, 'streams': [{'info': "ISO_Feb16.csv -- ['AGG'] -- 24_OUT.csv", 'source': "file:///Users/arnavkundu/Desktop/DATA_InOut/Output/PreProc = [['AGG'], ['AGG', 'DIFF']] -- AggWindows = [24, 18, 12]/ISO_Feb16.csv -- ['AGG'] -- 24_OUT.csv", 'columns': ['*'], 'last_record': 2000}]}

swarmCONFIG['inferenceType']
'TemporalMultiStep'
#27

Ok hold up, I want to figure out why your predictions are plotting at a seemingly different scale and resolution than the actual input values, and you’re telling me this happens even with hand-set params, where the classifier input matches the predicted field’s encoder params and no swarm was involved.

So swarming is not the problem here. I’m not sure how to help you at this point, but if I were you I would set a debugger to stop just before the computation and step into the algorithms to see at what point a bad value gets introduced into the result.
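If a full debugger feels heavy, a halfway step is to wrap the run call and dump each raw result as it comes back. A sketch (`run_and_trace` is a hypothetical helper, not part of the OPF API; it works with any callable that returns an object carrying an `inferences` attribute):

```python
def run_and_trace(run_fn, model_input, step, every=100):
    """Run one model step and print the raw inference dict every `every` steps.

    run_fn: any callable taking an input dict and returning an object with
    an `inferences` attribute (e.g. model.run from the loops in this thread).
    """
    result = run_fn(model_input)
    if step % every == 0:
        print('step {}: input={} -> inferences={}'.format(
            step, model_input, result.inferences))
    return result
```

Then `resultObj = run_and_trace(model.run, modelInput, index)` inside the loop makes it easy to spot the first step at which a bad value shows up.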

#28

Could you elaborate a bit? Like dissecting the ‘model.run(modelInput)’ line, or maybe the model object itself?
Here’s my process right now:

for k,v in PARAMS['modelParams']['sensorParams']['encoders'].items():
    print(k,' -- ',v)

(u'media_wear_pct', ' -- ', {'maxval': 2472.0, 'name': u'media_wear_pct', 'clipInput': True, 'minval': 666.0, 'n': 231, 'fieldname': u'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'})
('_classifierInput', ' -- ', {'maxval': 2472.0, 'classifierOnly': True, 'name': u'media_wear_pct', 'clipInput': True, 'minval': 666.0, 'n': 231, 'fieldname': u'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'})

predField = 'media_wear_pct'
model = ModelFactory.create(PARAMS)
model.enableInference({'predictedField': predField})

DF_NuPIC = pd.DataFrame(DF_input[_inclFIELDS].head(numRecords))
DF_NuPIC.head()
   media_wear_pct
0           912.0
1           912.0
2           912.0
3           912.0
4           897.0

DF_NuPIC = pd.DataFrame(DF_input[_inclFIELDS].head(numRecords))
for index, row in DF_NuPIC.iterrows():
    modelInput = dict()
    for field in _inclFIELDS:
        modelInput[field] = row[field]

    resultObj = model.run(modelInput)
    predFieldVal = row[predField]
    TargetVal = predFieldVal
    bestPredictions = resultObj.inferences["multiStepBestPredictions"]
    allPredictions = resultObj.inferences["multiStepPredictions"]
    AnomScore = resultObj.inferences["anomalyScore"]
    AnomLikl_obj = anomaly_likelihood.AnomalyLikelihood()
    AnomLikl = AnomLikl_obj.anomalyProbability(modelInput[predField], AnomScore, index)
    oneStep = bestPredictions[1]
    try:
        oneStepConfidence = allPredictions[1][oneStep]
    except KeyError:
        oneStepConfidence = 0
    result = {'index':index, 'AScore':AnomScore, 'ALikl':AnomLikl, 'predFieldVal':predFieldVal, 'Target':TargetVal, 'Pred_1': oneStep, 'Pred_1_conf':oneStepConfidence*100}
    if index%100 == 0:
        print('index: {} ---- AScore = {} ---- ALikl = {} ---- predField = {} ---- Target = {} ---- 1-Step = {} ---- Conf. = {}'.format( result['index'],result['AScore'],result['ALikl'],result['predFieldVal'],result['Target'],result['Pred_1'],result['Pred_1_conf']) )
    results.append(result)
    
    print('Index = {}'.format(index))
    print(' -- modelInput = {}'.format(modelInput))
    print(' -- predFieldVal = {}'.format(predFieldVal))
    print(' -- Pred_1 = {}'.format(oneStep))
    if index == 5:
        break

modelInput dicts - steps 0-5
Index = 0
 -- modelInput = {'media_wear_pct': 912.0}
 -- predFieldVal = 912.0
 -- Pred_1 = 29.0
Index = 1
 -- modelInput = {'media_wear_pct': 912.0}
 -- predFieldVal = 912.0
 -- Pred_1 = 29.0
Index = 2
 -- modelInput = {'media_wear_pct': 912.0}
 -- predFieldVal = 912.0
 -- Pred_1 = 29.0
Index = 3
 -- modelInput = {'media_wear_pct': 912.0}
 -- predFieldVal = 912.0
 -- Pred_1 = 29.0
Index = 4
 -- modelInput = {'media_wear_pct': 897.0}
 -- predFieldVal = 897.0
 -- Pred_1 = 27.0
Index = 5
 -- modelInput = {'media_wear_pct': 888.0}
 -- predFieldVal = 888.0
 -- Pred_1 = 26.0

model object attributes
model.getInferenceArgs()
{'predictedField': 'media_wear_pct'}

model.getFieldInfo()
(FieldMetaInfoBase(name=u'media_wear_pct', type='float', special=''),)

model.getInferenceType()
'TemporalAnomaly'

model.getRuntimeStats()
{'numRunCalls': 8, 'TemporalNextStep': {}}

model.getSchema()
<capnp.lib.capnp._StructModule object at 0x112cdb4d0>

Are there any other model attributes I could check? It’s just so nuts that it’s predicting 29 when the classifier min is 666!

#29

Here is a print of the total PARAMS dict I’m using:

AGGREGATIONINFO
seconds
 -- 0
fields
 -- [[u'device_temp_c', 'sum'], [u'libversion', 'sum'], [u'media_wear_pct', 'sum'], [u'power_on_hours', 'sum'], [u'write_amp', 'sum']]
months
 -- 0
days
 -- 0
years
 -- 0
hours
 -- 0
microseconds
 -- 0
weeks
 -- 0
minutes
 -- 0
milliseconds
 -- 0
 
('model', ' -- ', 'HTMPrediction')
 
('version', ' -- ', 1)
 
('predictAheadTime', ' -- ', 'null')
 
MODELPARAMS
sensorParams
 ---- verbosity = 0
 -------- media_wear_pct = {'maxval': 2472.0, 'name': u'media_wear_pct', 'clipInput': True, 'minval': 666.0, 'n': 231, 'fieldname': u'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'}
 -------- _classifierInput = {'maxval': 2472.0, 'classifierOnly': True, 'name': u'media_wear_pct', 'clipInput': True, 'minval': 666.0, 'n': 231, 'fieldname': u'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'}
 ---- sensorAutoReset = None
spParams
 ---- columnCount = 2048
 ---- spVerbosity = 0
 ---- localAreaDensity = -1.0
 ---- spatialImp = cpp
 ---- inputWidth = 946
 ---- synPermInactiveDec = 0.005
 ---- synPermConnected = 0.1
 ---- synPermActiveInc = 0.04
 ---- seed = 1965
 ---- numActiveColumnsPerInhArea = 40
 ---- boostStrength = 3.0
 ---- globalInhibition = 1
 ---- potentialPct = 0.85
trainSPNetOnlyIfRequested
 -- False
clParams
 ---- maxCategoryCount = 1000
 ---- implementation = cpp
 ---- verbosity = 0
 ---- steps = 1
 ---- alpha = 0.005
 ---- regionName = SDRClassifierRegion
tmParams
 ---- columnCount = 2048
 ---- activationThreshold = 16
 ---- pamLength = 1
 ---- cellsPerColumn = 32
 ---- permanenceInc = 0.1
 ---- minThreshold = 12
 ---- verbosity = 0
 ---- maxSynapsesPerSegment = 32
 ---- outputType = normal
 ---- globalDecay = 0.0
 ---- initialPerm = 0.21
 ---- permanenceDec = 0.1
 ---- seed = 1960
 ---- maxAge = 0
 ---- newSynapseCount = 20
 ---- maxSegmentsPerCell = 128
 ---- temporalImp = cpp
 ---- inputWidth = 2048
tmEnable
 -- True
spEnable
 -- True
inferenceType
 -- TemporalAnomaly
#30

Yes exactly, I’m out of ideas. I’ll ask Luiz for help today. Maybe he has an idea.

#31

Thanks! And if there are any more outputs I can print or anything, let me know!! :sweat_smile:

#32

It’d also be very useful if we could fix the problem with the Anomaly Likelihood calculation. Right now it just sits flat at 0.5.

from nupic.frameworks.opf.model_factory import ModelFactory
from nupic.algorithms import anomaly_likelihood
from nupic.swarming import permutations_runner

for index, row in DF_NuPIC.iterrows():
    resultObj = model.run(modelInput)
    predFieldVal = row[predField]
    TargetVal = predFieldVal
    bestPredictions = resultObj.inferences["multiStepBestPredictions"]
    allPredictions = resultObj.inferences["multiStepPredictions"]
    AnomScore = resultObj.inferences["anomalyScore"]
    AnomLikl_obj = anomaly_likelihood.AnomalyLikelihood()
    AnomLikl = AnomLikl_obj.anomalyProbability(modelInput[predField], AnomScore, index)
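One detail worth flagging in this loop: AnomalyLikelihood is stateful, and a fresh instance is constructed on every iteration, so the estimator never accumulates the history of scores it needs and keeps returning its 0.5 default. A pure-Python toy (not the NuPIC implementation, just a sketch of the "no history yet, return 0.5" behavior) shows the difference between re-creating the object per step and keeping one instance:

```python
class ToyLikelihood(object):
    """Returns the 0.5 default until `learning_period` scores have been
    seen, then reports how far the newest score sits from the running mean."""
    def __init__(self, learning_period=10):
        self.learning_period = learning_period
        self.history = []

    def anomaly_probability(self, score):
        self.history.append(score)
        if len(self.history) <= self.learning_period:
            return 0.5  # not enough history yet: the flat-line
        mean = sum(self.history) / float(len(self.history))
        # crude stand-in for a likelihood: distance from the running mean
        return min(1.0, 0.5 + abs(score - mean))

scores = [0.1] * 20 + [0.9]

# Re-created every step (as in the loop above): stuck at the default.
flat = [ToyLikelihood().anomaly_probability(s) for s in scores]

# Created once, outside the loop: reacts to the 0.9 spike at the end.
est = ToyLikelihood()
reacting = [est.anomaly_probability(s) for s in scores]
print(flat[-1], reacting[-1])  # -> 0.5 1.0
```

Moving `AnomLikl_obj = anomaly_likelihood.AnomalyLikelihood()` above the for loop is the analogous fix here. Note that the real NuPIC estimator also has a probationary period (several hundred records by default, if I recall correctly), so the first stretch of output will still read 0.5 even when it is wired up correctly.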
#33

No new ideas, except to debug and inspect the model object, look for the classifier and see what class it is?

This is a completely different issue. Please create a new thread if you want to address it.

#34

I think I may finally have a lead on this issue!!

So here are the result objects for the first few time steps (resultObj = model.run(modelInput)). This model has only one field (the predicted field ‘media_wear_pct’) with min/max values around 600-1000, yet the predictions are coming back around ~250. It seems that the ‘bucketIndex’ is becoming the prediction in ‘multiStepPredictions’. Do you have any intuition about how/why this might be? Maybe I haven’t properly engaged the classifier? Here is my current code:
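The bucket arithmetic supports that reading. Assuming evenly spaced ScalarEncoder buckets with resolution (maxval - minval) / (n - w) and rounding (my reconstruction, not the NuPIC source), the encoder below (minval 626.15, maxval 1020, n=275, w=21) puts an input of 1020.0 into bucket 254 -- exactly the value showing up as the "prediction":

```python
def bucket_index(value, minval, maxval, n, w):
    """Bucket index for an evenly spaced scalar encoding (a reconstruction
    of ScalarEncoder-style bucketing, with rounding and clipping)."""
    resolution = (maxval - minval) / float(n - w)
    idx = int(round((value - minval) / resolution))
    return max(0, min(n - w, idx))  # clip, as clipInput=True would

print(bucket_index(1020.0, 626.15, 1020.0, 275, 21))  # -> 254
print(bucket_index(912.0, 666.0, 2472.0, 231, 21))    # -> 29
```

The second call uses the earlier two-field encoder (minval 666, maxval 2472, n=231, w=21): input 912.0 lands in bucket 29, matching the mysterious Pred_1 = 29 from post #28, and 897.0 and 888.0 map to 27 and 26 the same way.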

**MAIN LOOP:**
    results = list()
    AnomLikl_obj = anomaly_likelihood.AnomalyLikelihood()
    DF_NuPIC = pd.DataFrame(DF_input[_inclFIELDS].head(numRecords))
    for index, row in DF_NuPIC.iterrows():
        modelInput = dict()
        for field in _inclFIELDS:
            modelInput[field] = row[field]
        resultObj = model.run(modelInput)

        if index < 3:
            print(resultObj)

        AnomScore = resultObj.inferences["anomalyScore"]
        AnomLikl = AnomLikl_obj.anomalyProbability(row[field], AnomScore, timestamp=None)

        predFieldVal = row[predField]
        bestPredictions = resultObj.inferences["multiStepBestPredictions"]
        allPredictions = resultObj.inferences["multiStepPredictions"]
        oneStep = bestPredictions[1]


**OUTPUT:**
    HTM Model Initiated
     --> inclFIELD_SET = ['media_wear_pct']
     --> Pred Field = media_wear_pct
     --> MYENC = ...
    ('---->', u'media_wear_pct', ' -- ', {'maxval': 1020.0, 'name': 'media_wear_pct', 'clipInput': True, 'minval': 626.14999999999998, 'n': 275, 'fieldname': 'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'})
    ('---->', '_classifierInput', ' -- ', {'maxval': 1020.0, 'classifierOnly': True, 'name': 'media_wear_pct', 'clipInput': True, 'minval': 626.14999999999998, 'n': 275, 'fieldname': 'media_wear_pct', 'w': 21, 'type': 'ScalarEncoder'})
    ('---->', u'media_written_gb', ' -- ', None)
    ('---->', u'device_temp_c', ' -- ', None)
    ('---->', u'power_on_hours', ' -- ', None)
    ('---->', u'write_amp', ' -- ', None)
    ('---->', u'drive_writes', ' -- ', None)
    ('---->', u'host_written_gb', ' -- ', None)
    ('---->', u'host_read_gb', ' -- ', None)
 
ModelResult(	predictionNumber=0
	rawInput={'media_wear_pct': 1020.0}
	sensorInput=SensorInput(	dataRow=(1020.0,)
	dataDict={'media_wear_pct': 1020.0}
	dataEncodings=[array([ 0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,  0.,
        0.,  0.,  0.,  0.,  0.,  0.,  0.,  1.,  1.,  1.,  1.,  1.,  1.,
        1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,
        1.,  1.], dtype=float32)]
	sequenceReset=0.0
	category=-1
)
	inferences={'multiStepPredictions': {1: {254.0: 1.0}}, 'multiStepBucketLikelihoods': {1: {0: 1.0}}, 'multiStepBestPredictions': {1: 254.0}, 'anomalyLabel': '[]', 'anomalyScore': 1.0}
	metrics=None
	predictedFieldIdx=0
	predictedFieldName=media_wear_pct
	classifierInput=ClassifierInput(	dataRow=1020.0
	bucketIndex=254
)
)
index: 0 ---- AScore = 1.0 ---- ALikl = 0.5 ---- predField = 1020.0 ---- 1-Step = 254.0 ---- Conf. = 1.0
ModelResult(	predictionNumber=1
	rawInput={'media_wear_pct': 1020.0}
	sensorInput=SensorInput(	dataRow=(1020.0,)
	dataDict={'media_wear_pct': 1020.0}
	dataEncodings=[<same 275-bit encoding as in the first ModelResult above>]
	sequenceReset=0.0
	category=-1
)
	inferences={'multiStepPredictions': {1: {254.0: 0.99999999999999689}}, 'multiStepBucketLikelihoods': {1: {0: 0.99999999999999689}}, 'multiStepBestPredictions': {1: 254.0}, 'anomalyLabel': '[]', 'anomalyScore': 1.0}
	metrics=None
	predictedFieldIdx=0
	predictedFieldName=media_wear_pct
	classifierInput=ClassifierInput(	dataRow=1020.0
	bucketIndex=254
)
)
index: 1 ---- AScore = 1.0 ---- ALikl = 0.5 ---- predField = 1020.0 ---- 1-Step = 254.0 ---- Conf. = 1.0
ModelResult(	predictionNumber=2
	rawInput={'media_wear_pct': 1020.0}
	sensorInput=SensorInput(	dataRow=(1020.0,)
	dataDict={'media_wear_pct': 1020.0}
	dataEncodings=[<same 275-bit encoding as in the first ModelResult above>]
	sequenceReset=0.0
	category=-1
)
	inferences={'multiStepPredictions': {1: {254.0: 1.0}}, 'multiStepBucketLikelihoods': {1: {0: 1.0}}, 'multiStepBestPredictions': {1: 254.0}, 'anomalyLabel': '[]', 'anomalyScore': 1.0}
	metrics=None
	predictedFieldIdx=0
	predictedFieldName=media_wear_pct
	classifierInput=ClassifierInput(	dataRow=1020.0
	bucketIndex=254
)
)
index: 2 ---- AScore = 1.0 ---- ALikl = 0.5 ---- predField = 1020.0 ---- 1-Step = 254.0 ---- Conf. = 1.0
index: 3 ---- AScore = 0.0 ---- ALikl = 0.5 ---- predField = 1020.0 ---- 1-Step = 254.0 ---- Conf. = 1.0
index: 4 ---- AScore = 0.0 ---- ALikl = 0.5 ---- predField = 1020.0 ---- 1-Step = 254.0 ---- Conf. = 1.0
#35

I’m going to have to call in @subutai for this one. Here are the model params Sam used. Any idea why the bucketIndex would be the same value as the best prediction?

#36

Beats me as of now. The classifier should be taking the ‘bucketIndex’ and converting it back into the min/max range given in the ‘_classifierInput’ encoding, right?

Thanks!!

#37

Yes, it’s definitely putting the bucketIndex in the ‘multiStepPredictions’. I just tried another data set and I’m getting predictions of ‘0’ with min/max values in the thousands. I must not be using the classifier correctly. Do I have to do something in particular to enable it properly?

#38

Maybe there’s a function somewhere I can use to convert the ‘bucketIndex’ directly into a predicted value? I suppose that’s what the classifier does, though maybe I could just use the same logic outside of it?

#39

Might a function like this work to convert the ‘bucketIndex’ into a predicted value?

def predFromBucketID(bucketID, predField_min, predField_max, predField_n, predField_w):
    # n - w is the highest bucket index for a ScalarEncoder
    max_bucketID = predField_n - predField_w
    # force float division (this silently floors to 0 on Python 2 otherwise)
    bucketID_norm = float(bucketID) / max_bucketID
    predField_range = predField_max - predField_min
    predFieldVal = predField_min + (predField_range * bucketID_norm)
    return predFieldVal
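Plugging the encoder params from post #34 into that function is an encouraging sanity check (restated here with explicit float division, which matters on Python 2, so the snippet stands alone):

```python
def predFromBucketID(bucketID, predField_min, predField_max, predField_n, predField_w):
    # assumes evenly spaced ScalarEncoder buckets; n - w is the top bucket index
    max_bucketID = predField_n - predField_w
    bucketID_norm = float(bucketID) / max_bucketID
    return predField_min + (predField_max - predField_min) * bucketID_norm

# minval=626.15, maxval=1020.0, n=275, w=21 from post #34:
# the spurious "prediction" of 254.0 maps back to the clipped input value.
print(round(predFromBucketID(254, 626.15, 1020.0, 275, 21), 4))  # -> 1020.0

# minval=666, maxval=2472, n=231, w=21 from post #28: bucket 29 maps to ~915.4,
# close to the 912.0 input that produced Pred_1 = 29.
print(round(predFromBucketID(29, 666.0, 2472.0, 231, 21), 4))  # -> 915.4
```

So the formula recovers values in the right range from the bucket indices the classifier is (wrongly) emitting as predictions.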