# PoseNet Plus

Extracts head pose data (pitch, yaw, roll) from PoseNet estimations and optionally moves a pointer/mouse cursor on the screen 🐭

Coming soon


## Local Development

### Requirements

First, install Node.js and then run `npm install -g parcel-bundler` to install Parcel. Then, from this project's root directory, run `npm install` to download all the dependencies. Once that's done, you can run the following scripts from the project root:

```bash
# Start a local development server on http://localhost:1234
npm start
```

## Instantiating

```js
// These are the config defaults, pass null to just use these
const posenetCursor = new PoseNetCursor({
  // Confidence needed for a keypoint to be registered
  confidence: 0.75,
  training: {
    size: 100
  },
  imageScaleFactor: 0.5,
  outputStride: 16
})
```
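The README doesn't yet document how pitch, yaw, and roll are derived from PoseNet keypoints, but as a rough illustration of the idea (not the library's actual implementation), head roll and yaw can be approximated from the eye and nose keypoints of a PoseNet pose object. The function name and the exact math below are hypothetical:

```js
// Sketch only: approximate head roll and yaw from a PoseNet pose object.
// Pitch is harder to estimate from 2D keypoints alone, so it is omitted here.
function headPoseFromPose (pose, minConfidence = 0.75) {
  // Index keypoints by part name, keeping only confident detections
  const parts = {}
  pose.keypoints.forEach(k => {
    if (k.score >= minConfidence) parts[k.part] = k.position
  })
  if (!parts.nose || !parts.leftEye || !parts.rightEye) return null

  // Roll: angle of the line connecting the eyes, in radians
  const roll = Math.atan2(
    parts.rightEye.y - parts.leftEye.y,
    parts.rightEye.x - parts.leftEye.x
  )

  // Yaw: nose offset from the midpoint between the eyes,
  // normalized by the inter-eye distance
  const eyeMidX = (parts.leftEye.x + parts.rightEye.x) / 2
  const eyeDist = Math.hypot(
    parts.rightEye.x - parts.leftEye.x,
    parts.rightEye.y - parts.leftEye.y
  )
  const yaw = (parts.nose.x - eyeMidX) / eyeDist

  return { roll, yaw }
}
```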

## Why was this started?

I want to build a Chrome Extension that lets people with disabilities browse the web hands-free using head tracking. To keep things optimized and file sizes low, I'd like to build everything around TensorFlow.js.

### PoseNet ported over to TensorFlow.js

The TensorFlow team is continuously releasing and maintaining a library of deep learning models, one of them being PoseNet. PoseNet is a very lightweight, very fast human pose estimator that can track many people simultaneously.

I believe that we can leverage PoseNet to help us quickly create customizable, handsfree interfaces for the web. By creating a library we leave open the possibility of porting the project to different platforms.
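For context on how PoseNet is consumed from TensorFlow.js, here is a minimal sketch using the `@tensorflow-models/posenet` package (1.x API). The option values mirror the config defaults above; the video element and surrounding code are placeholders, not part of this project:

```js
import * as posenet from '@tensorflow-models/posenet'

async function estimate (video) {
  // Load the PoseNet model weights
  const net = await posenet.load()

  // imageScaleFactor / outputStride match the config defaults above:
  // a lower scale factor and higher stride run faster at some cost in accuracy
  const pose = await net.estimateSinglePose(video, 0.5, false, 16)

  // pose.keypoints is a list of { part, score, position: { x, y } }
  console.log(pose.keypoints.filter(k => k.score >= 0.75))
}
```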


## Sources