Diffstat (limited to 'toolkit/components/ml/docs')
-rw-r--r--  toolkit/components/ml/docs/index.md   44
-rw-r--r--  toolkit/components/ml/docs/index.rst  47
2 files changed, 47 insertions, 44 deletions
diff --git a/toolkit/components/ml/docs/index.md b/toolkit/components/ml/docs/index.md
deleted file mode 100644
index 1b2015456b..0000000000
--- a/toolkit/components/ml/docs/index.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Machine Learning
-
-This component is an experimental local inference engine for machine learning. No inference engine is actually integrated yet.
-
-Here is an example of the API:
-
-```js
-// The engine process manages the life cycle of the engine. It runs in its own process.
-// Models can consume large amounts of memory, and this helps encapsulate it at the
-// operating system level.
-const EngineProcess = ChromeUtils.importESModule("chrome://global/content/ml/EngineProcess.sys.mjs");
-
-// The MLEngineParent is a JSActor that can communicate with the engine process.
-const mlEngineParent = await EngineProcess.getMLEngineParent();
-
-
-/**
- * When implementing a model, there should be a class that provides a `getModel` function
- * that is responsible for providing the `ArrayBuffer` of the model. Typically this
- * download is managed by RemoteSettings.
- */
-class SummarizerModel {
- /**
- * @returns {ArrayBuffer}
- */
- static getModel() { ... }
-}
-
-// An engine can be created using a unique name for the engine, and the function
-// to get the model. This class handles the life cycle of the engine.
-const summarizer = mlEngineParent.getEngine(
- "summarizer",
- SummarizerModel.getModel
-);
-
-// In order to run the model, use the `run` method. This will initiate the engine if
-// it is needed, and return the result. The messaging to the engine process happens
-// through a MessagePort.
-const result = await summarizer.run("A sentence that can be summarized.");
-
-// The engine can be explicitly terminated, or it will be destroyed through an idle
-// timeout when not in use, as the memory requirements for models can be quite large.
-summarizer.terminate();
-```
diff --git a/toolkit/components/ml/docs/index.rst b/toolkit/components/ml/docs/index.rst
new file mode 100644
index 0000000000..a171a982a3
--- /dev/null
+++ b/toolkit/components/ml/docs/index.rst
@@ -0,0 +1,47 @@
+Machine Learning
+================
+
+This component is an experimental machine learning local inference engine based on
+Transformers.js and the ONNX runtime.
+
+In the example below, an image is converted to text using the ``image-to-text`` task.
+
+
+.. code-block:: javascript
+
+ const { PipelineOptions, EngineProcess } = ChromeUtils.importESModule("chrome://global/content/ml/EngineProcess.sys.mjs");
+
+ // First, we create a pipeline options object, which contains the task name
+ // and any other options needed for the task.
+ const options = new PipelineOptions({ taskName: "image-to-text" });
+
+ // Next, we create an engine parent object via EngineProcess
+ const engineParent = await EngineProcess.getMLEngineParent();
+
+ // We then create the engine object, using the options
+ const engine = engineParent.getEngine(options);
+
+ // At this point we are ready to do some inference.
+
+ // We need to get the image as an array buffer and wrap it into a request object.
+ const response = await fetch("https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg");
+ const buffer = await response.arrayBuffer();
+ const mimeType = response.headers.get('Content-Type');
+ const request = {
+ data: buffer,
+ mimeType: mimeType
+ };
+
+ // Finally, we run the engine with the request object
+ const res = await engine.run(request);
+
+ // The result is a string containing the text extracted from the image
+ console.log(res);
+
+
+Supported Inference Tasks
+:::::::::::::::::::::::::
+
+The following tasks are supported by the machine learning engine:
+
+.. js:autofunction:: imageToText