When training a model, you first instantiate it with something like the following:

import tensorflow as tf

# One numeric feature column per input column used by the model
feature_columns = [tf.feature_column.numeric_column(key=str(i))
                   for i in range(2, eval_data.shape[1] - 1)]

estimator = tf.estimator.DNNClassifier(
    model_dir=model_dir,
    feature_columns=feature_columns,
    hidden_units=hidden_units,
    n_classes=args['num_classes'],
    config=config,
    dropout=dropout,
    optimizer=tf.train.AdamOptimizer(
        learning_rate=args['learning_rate']))
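
For context, training itself then looks something like this (a sketch; train_data, train_labels, and args['train_steps'] are stand-ins for my actual input pipeline and settings):

# Sketch of the training call; the feature keys match the columns above
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={str(i): train_data[:, i] for i in range(2, train_data.shape[1] - 1)},
    y=train_labels,
    num_epochs=None,
    shuffle=True)
estimator.train(input_fn=train_input_fn, steps=args['train_steps'])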

After training, there is a directory under the path indicated by model_dir containing the checkpoints and other artifacts:

model_dir/
   my_model
      checkpoint
      eval
      graph.pbtxt
      model.ckpt-0.data-00000-of-00002
      model.ckpt-0.data-00001-of-00002
      model.ckpt-0.index
      model.ckpt-0.meta
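
I can at least locate and inspect the checkpoints directly, e.g. (a minimal sketch, assuming model_dir points at the directory that actually holds the checkpoint files):

# Find the most recent checkpoint and list the variables it stores
ckpt_path = tf.train.latest_checkpoint(model_dir)
reader = tf.train.NewCheckpointReader(ckpt_path)
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

But that only gives me the variable values, not the constructor arguments I used.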

The issue is that if I want to do more work with this model somewhere else, i.e., re-instantiate it for further training, evaluation, etc. (serving aside, since for that case I could export the model and use TF Serving), I first need to instantiate the model again, which means I have to somehow keep track of the values of all of those constructor parameters.
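
To be concrete about the serving case, the export I have in mind is something like this (a sketch using the stock parsing receiver; export_dir is a path of my choosing):

# Sketch: export a SavedModel for TF Serving (export_dir is arbitrary)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec(feature_columns))
estimator.export_savedmodel(export_dir, serving_input_fn)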

Is there a way to load a model by just pointing at the model_dir, and somehow get an instance of the model loaded into memory?
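
In other words, I'm hoping for something like the following, where load_estimator is purely hypothetical and eval_input_fn/more_steps are stand-ins:

# Hypothetical: rebuild the estimator from model_dir alone (load_estimator does not exist)
estimator = load_estimator(model_dir)
estimator.evaluate(input_fn=eval_input_fn)
estimator.train(input_fn=train_input_fn, steps=more_steps)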
