
I need help with async/await.

I'm currently studying https://github.com/tensorflow/tfjs-converter.

I'm stumped at this part of the code (loading my Python-converted JS model for use in the browser):

import * as tf from '@tensorflow/tfjs';
import {loadFrozenModel} from '@tensorflow/tfjs-converter';

/*1st model loader*/
const MODEL_URL = './model/web_model.pb';
const WEIGHTS_URL = '.model/weights_manifest.json';
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);

/*2nd model execution in browser*/
const cat = document.getElementById('cat');
model.execute({input: tf.fromPixels(cat)});

I noticed it's using ES6 (import/export) and ES2017 (async/await), so I've used Babel with babel-preset-env, babel-polyfill and babel-plugin-transform-runtime. I've used webpack but switched over to Parcel as my bundler (as suggested by the TensorFlow.js devs). In both bundlers I keep getting the error that the await should be wrapped in an async function, so I wrapped the first part of the code in an async function hoping to get a Promise.

async function loadMod(){

const MODEL_URL = './model/web_model.pb';
const WEIGHTS_URL = '.model/weights_manifest.json';
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);

} 

loadMod();
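For reference, here is the pattern I understand the linter to be asking for: the async function returns a Promise, and the caller consumes it instead of ignoring it. This is only a sketch with a hypothetical stand-in loader (not the real loadFrozenModel), to isolate the async/await mechanics:

```javascript
// Hypothetical stand-in for loadFrozenModel: any function returning a Promise.
function fakeLoadModel(url) {
  return Promise.resolve({ url, execute: () => 'ran' });
}

async function loadMod() {
  // await is only legal inside an async function
  const model = await fakeLoadModel('./model/web_model.pb');
  return model; // the async function wraps this return value in a Promise
}

// Consume the Promise so it is not silently ignored
loadMod()
  .then((model) => console.log('loaded', model.url))
  .catch((err) => console.error('model failed to load', err));
```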

Now both bundlers say that 'await is a reserved word'. The VS Code ESLint extension says that loadMod() returns a Promise that is ignored. (So did the promise fail or get rejected?) I'm trying to reference the JavaScript model files using a relative path. Is this wrong? Do I have to serve the ML model from the cloud? Can't it be loaded from a relative local path?

Any suggestions would be much appreciated. Thanks!

edkeveked
RadEdje

2 Answers


You are trying to use this function:

tf.loadFrozenModel(MODEL_FILE_URL, WEIGHT_MANIFEST_FILE_URL)

Your code also has a syntax error. If you use the keyword 'await', you must place it inside an async function, like this:

async function run () {

  /*1st model loader*/
  const MODEL_URL = './model/web_model.pb';
  const WEIGHTS_URL = './model/weights_manifest.json';
  const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);

  /*2nd model execution in browser*/
  const cat = document.getElementById('cat');
  model.execute({input: tf.fromPixels(cat)});

}
run();
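One detail worth adding: if the await rejects (for example because of the '.model/...' path typo in the question, which is missing a slash), it throws inside the async function, so a try/catch handles it. This is a sketch with a hypothetical stub in place of loadFrozenModel, purely to show the failure path:

```javascript
// Hypothetical stub loader: resolves for relative './' paths, rejects otherwise.
function loadFrozenModelStub(modelUrl) {
  return modelUrl.startsWith('./')
    ? Promise.resolve({ modelUrl })
    : Promise.reject(new Error(`bad path: ${modelUrl}`));
}

async function run() {
  try {
    // the missing slash makes the stub reject, so the catch branch runs
    const model = await loadFrozenModelStub('.model/weights_manifest.json');
    console.log('loaded', model.modelUrl);
  } catch (err) {
    console.error('load failed:', err.message);
  }
}

run();
```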
Stephen Rauch

tf.loadFrozenModel uses fetch under the hood. fetch retrieves files served by a server and cannot be used with local files unless those are served by a server. See this answer for more.

For loadFrozenModel to work with local files, those files need to be served by a server. One can use http-server to serve the model topology and its weights.

 # install the http-server module
 npm install http-server -g

 # cd to the directory containing the files
 # launch the server to serve static files of model topology and weights
 http-server -c1 --cors .

 // load model in js script
 (async () => {
   ...
   const model = await tf.loadFrozenModel('http://localhost:8080/tensorflowjs_model.pb', 'http://localhost:8080/weights_manifest.json')
 })()
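Since the model is now fetched over HTTP, a quick sanity check that the server is actually serving the files can save debugging time. This is a hedged sketch (the helper name and injectable fetch are my own, not part of tfjs; the URLs assume the http-server defaults above):

```javascript
// Check that a URL is being served before handing it to loadFrozenModel.
// fetchImpl is injectable so the check can be exercised without a live server.
async function manifestReachable(url, fetchImpl = fetch) {
  const res = await fetchImpl(url);
  return res.ok; // status 200-299 means http-server is serving the file
}

// Usage (assumes http-server is running as shown above):
// manifestReachable('http://localhost:8080/weights_manifest.json')
//   .then((ok) => console.log(ok ? 'served' : 'missing'));
```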
edkeveked