Model: TensorFlow Handpose

[Image: A person holding their hands up toward the webcam, with a green 3D hand skeleton overlaid on the hands]
handsfree.update({handpose: true})

This model includes a fingertip raycaster, center of palm object, and a minimal THREE environment which doubles as a basic debugger for your project.


With defaults

handsfree = new Handsfree({handpose: true})

With config

handsfree = new Handsfree({
  handpose: {
    enabled: true,

    // The backend to use: 'webgl' or 'wasm'
    // 🚨 Currently only webgl is supported
    backend: 'webgl',

    // How many frames to go without running the bounding box detector. 
    // Set to a lower value if you want a safety net in case the mesh detector produces consistently flawed predictions.
    maxContinuousChecks: Infinity,

    // Threshold for discarding a prediction
    detectionConfidence: 0.8,

    // A float representing the threshold for deciding whether boxes overlap too much in non-maximum suppression. Must be between [0, 1]
    iouThreshold: 0.3,

    // A threshold for deciding when to remove boxes based on score in non-maximum suppression.
    scoreThreshold: 0.75
  }
})


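The iouThreshold and scoreThreshold options above feed into the detector's non-maximum suppression step. As a rough illustration of how those two values interact (plain JavaScript, not Handsfree's or TensorFlow's actual implementation):

```javascript
// Illustrative non-maximum suppression. Boxes are {x, y, w, h, score}.
// Boxes below scoreThreshold are dropped; of the rest, any box that
// overlaps a higher-scoring kept box by more than iouThreshold is dropped.
function iou(a, b) {
  const x1 = Math.max(a.x, b.x)
  const y1 = Math.max(a.y, b.y)
  const x2 = Math.min(a.x + a.w, b.x + b.w)
  const y2 = Math.min(a.y + a.h, b.y + b.h)
  const inter = Math.max(0, x2 - x1) * Math.max(0, y2 - y1)
  return inter / (a.w * a.h + b.w * b.h - inter)
}

function nonMaxSuppression(boxes, iouThreshold, scoreThreshold) {
  const sorted = boxes
    .filter(b => b.score >= scoreThreshold)
    .sort((a, b) => b.score - a.score)
  const kept = []
  for (const box of sorted) {
    if (kept.every(k => iou(k, box) <= iouThreshold)) kept.push(box)
  }
  return kept
}
```

With the defaults above, a candidate box must score at least 0.75 to be considered at all, and any box overlapping a stronger one by more than 30% IoU is discarded.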
[Image: A diagram showing all the landmarks of the Handpose model]

The data

// Get the [x, y, z] of various landmarks
handsfree.data.handpose.landmarks[4] // Thumb tip
handsfree.data.handpose.landmarks[8] // Index fingertip

// Normalized landmark values from [0 - 1] for the x and y
// The z isn't really depth but "units" away from the camera, so those aren't normalized
handsfree.data.handpose.normalized[0]

// How confident the model is that a hand is in view [0 - 1]
handsfree.data.handpose.handInViewConfidence

// The top left and bottom right pixels containing the hand in the frame
handsfree.data.handpose.boundingBox = {
  topLeft: [x, y],
  bottomRight: [x, y]
}

// [x, y, z] of various hand landmarks
handsfree.data.handpose.annotations = {
  thumb: [...[x, y, z]], // 4 landmarks
  indexFinger: [...[x, y, z]], // 4 landmarks
  middleFinger: [...[x, y, z]], // 4 landmarks
  ringFinger: [...[x, y, z]], // 4 landmarks
  pinkyFinger: [...[x, y, z]], // 4 landmarks
  palmBase: [[x, y, z]] // 1 landmark
}
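For example, the boundingBox shape above makes it easy to derive the hand's on-screen center and size. The helpers below are hypothetical conveniences, not part of Handsfree.js:

```javascript
// Derive the center and size of the detected hand from a boundingBox
// shaped like {topLeft: [x, y], bottomRight: [x, y]} (pixel coordinates)
function boxCenter({topLeft, bottomRight}) {
  return [
    (topLeft[0] + bottomRight[0]) / 2,
    (topLeft[1] + bottomRight[1]) / 2
  ]
}

function boxSize({topLeft, bottomRight}) {
  return [bottomRight[0] - topLeft[0], bottomRight[1] - topLeft[1]]
}

const box = {topLeft: [100, 50], bottomRight: [300, 250]}
boxCenter(box) // [200, 150]
boxSize(box)   // [200, 200]
```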

Examples of accessing the data

handsfree = new Handsfree({handpose: true})
handsfree.start()

// From anywhere
console.log(handsfree.data.handpose)

// From inside a plugin
handsfree.use('logger', data => {
  if (!data.handpose) return
  console.log(data.handpose)
})

// From an event
document.addEventListener('handsfree-data', event => {
  const data = event.detail
  if (!data.handpose) return
  console.log(data.handpose)
})
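As a concrete use of the landmark indices above, here is a sketch of a pinch detector that measures the distance between the thumb tip (landmark 4) and index fingertip (landmark 8). The 40-pixel threshold is an illustrative value, not something defined by Handsfree.js:

```javascript
// Returns true when the thumb tip and index fingertip are close together.
// landmarks is an array of [x, y, z] points, as in data.handpose.landmarks.
// The default threshold of 40px is illustrative only.
function isPinching(landmarks, threshold = 40) {
  const [x1, y1, z1] = landmarks[4] // thumb tip
  const [x2, y2, z2] = landmarks[8] // index fingertip
  return Math.hypot(x2 - x1, y2 - y1, z2 - z1) < threshold
}

// Inside a plugin:
// handsfree.use('pinch', data => {
//   if (!data.handpose) return
//   if (isPinching(data.handpose.landmarks)) console.log('pinch!')
// })
```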

Three.js Properties

The following helper Three.js properties are also available:

// A THREE Arrow object protruding from the index finger
// - You can use this to calculate pointing vectors
handsfree.model.handpose.three.arrow

// The THREE camera
handsfree.model.handpose.three.camera

// An additional mesh that is positioned at the center of the palm
// - This is where we raycast the Hand Pointer from
handsfree.model.handpose.three.centerPalmObj

// The meshes representing each skeleton joint
// - You can tap into the rotation to calculate pointing vectors for each fingertip
handsfree.model.handpose.three.meshes

// A reusable THREE raycaster
// @see
handsfree.model.handpose.three.raycaster

// The THREE scene and renderer used to hold the hand model
handsfree.model.handpose.three.scene
handsfree.model.handpose.three.renderer

// The screen object. The Hand Pointer raycasts from the centerPalmObj
// onto this screen object. The point of intersection is then mapped to
// the device screen to position the pointer
handsfree.model.handpose.three.screen
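The screen mapping described above (intersection point mapped to device pixels) amounts to normalizing the hit point against the screen plane's bounds. A rough sketch in plain JavaScript, independent of Three.js and not Handsfree's actual implementation:

```javascript
// Map a 3D intersection point on a screen plane to 2D device pixels.
// Assumes the plane spans [minX, maxX] x [minY, maxY] in world units,
// and that the plane's y axis points up while screen pixels grow downward.
function intersectionToScreen(point, plane, screenWidth, screenHeight) {
  const nx = (point.x - plane.minX) / (plane.maxX - plane.minX)
  const ny = (point.y - plane.minY) / (plane.maxY - plane.minY)
  return {
    x: nx * screenWidth,
    y: (1 - ny) * screenHeight
  }
}

const plane = {minX: -1, maxX: 1, minY: -1, maxY: 1}
intersectionToScreen({x: 0, y: 0, z: 0}, plane, 1920, 1080)
// the center of the plane maps to the center of the screen: {x: 960, y: 540}
```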


The following projects all use TensorFlow Handpose, though not all of them were necessarily made with Handsfree.js.
