Model: TensorFlow Handpose

[Image: a person holding their hands up toward a webcam, with a green 3D hand skeleton overlaid on the hands]
handsfree.update({handpose: true})

This model includes a fingertip raycaster, center of palm object, and a minimal THREE environment which doubles as a basic debugger for your project.

Usage

With defaults

handsfree = new Handsfree({handpose: true})
handsfree.start()

With config

handsfree = new Handsfree({
  handpose: {
    enabled: true,

    // The backend to use: 'webgl' or 'wasm'
    // 🚨 Currently only webgl is supported
    backend: 'webgl',

    // How many frames to go without running the bounding box detector. 
    // Set to a lower value if you want a safety net in case the mesh detector produces consistently flawed predictions.
    maxContinuousChecks: Infinity,

    // Threshold for discarding a prediction
    detectionConfidence: 0.8,

    // A float representing the threshold for deciding whether boxes overlap too much in non-maximum suppression. Must be between [0, 1]
    iouThreshold: 0.3,

    // A threshold for deciding when to remove boxes based on score in non-maximum suppression.
    scoreThreshold: 0.75
  }
})
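The same options can also be changed at runtime with `handsfree.update`, as in the snippet at the top of this page. A minimal sketch, assuming unlisted options keep their current values (the `0.9` here is an arbitrary example):

```javascript
// Tighten detection on an already-running instance
handsfree.update({
  handpose: {
    enabled: true,
    detectionConfidence: 0.9 // arbitrary example value
  }
})
```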

Data

[Image: a diagram showing all 21 landmarks of the Handpose model]

// Get the [x, y, z] of various landmarks
// Thumb tip
handsfree.data.handpose.landmarks[4]
// Index fingertip
handsfree.data.handpose.landmarks[8]

// Normalized landmark values, with x and y mapped to [0, 1]
// The z isn't true depth but relative "units" from the camera, so it isn't normalized
handsfree.data.handpose.normalized[0]
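Because the normalized x and y land in [0, 1], they can be mapped back to pixels once you know the video dimensions. A small sketch (the 640×480 feed size is a placeholder, not something the library guarantees):

```javascript
// Map a normalized [x, y, z] landmark to pixel coordinates.
// z is passed through unchanged since it isn't normalized (see note above).
function toPixels([x, y, z], width, height) {
  return [x * width, y * height, z]
}

// e.g. with a hypothetical 640x480 webcam feed:
// const [px, py] = toPixels(handsfree.data.handpose.normalized[8], 640, 480)
```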

// How confident the model is that a hand is in view [0 - 1]
handsfree.data.handpose.handInViewConfidence

// The top left and bottom right pixels containing the hand in the video frame
handsfree.data.handpose.boundingBox = {
  topLeft: [x, y],
  bottomRight: [x, y]
}
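The bounding box is handy for deriving the hand's on-screen size and center. A minimal sketch over the shape above:

```javascript
// Derive the center point and dimensions of the hand's bounding box
function boxInfo({topLeft, bottomRight}) {
  const [x1, y1] = topLeft
  const [x2, y2] = bottomRight
  return {
    center: [(x1 + x2) / 2, (y1 + y2) / 2],
    width: x2 - x1,
    height: y2 - y1
  }
}

// e.g. boxInfo(handsfree.data.handpose.boundingBox)
```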

// [x, y, z] of various hand landmarks
handsfree.data.handpose.annotations = {
  thumb: [...[x, y, z]], // 4 landmarks
  indexFinger: [...[x, y, z]], // 4 landmarks
  middleFinger: [...[x, y, z]], // 4 landmarks
  ringFinger: [...[x, y, z]], // 4 landmarks
  pinkyFinger: [...[x, y, z]], // 4 landmarks
  palmBase: [[x, y, z]] // 1 landmark
}
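The annotations make gesture heuristics straightforward. A sketch of a pinch check, assuming each finger's four landmarks run base to tip (so index 3 is the fingertip); the 40-unit threshold is an arbitrary example:

```javascript
// Euclidean distance between two [x, y, z] points
function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2])
}

// Pinch check: assumes the last landmark in each finger array is the fingertip
function isPinching(annotations, threshold = 40) {
  const thumbTip = annotations.thumb[3]
  const indexTip = annotations.indexFinger[3]
  return distance(thumbTip, indexTip) < threshold
}

// e.g. isPinching(handsfree.data.handpose.annotations)
```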

Examples of accessing the data

handsfree = new Handsfree({handpose: true})
handsfree.start()

// From anywhere
handsfree.data.handpose.landmarks

// From inside a plugin
handsfree.use('logger', data => {
  if (!data.handpose) return

  console.log(data.handpose.boundingBox)
})

// From an event
document.addEventListener('handsfree-data', event => {
  const data = event.detail
  if (!data.handpose) return

  console.log(data.handpose.annotations.indexFinger)
})
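These patterns compose: a plugin can gate its work on `handInViewConfidence` before touching the landmarks. A hedged sketch (the `0.95` cutoff and plugin name are arbitrary examples):

```javascript
// Pure helper so the gating logic is easy to test on its own
function isConfident(handpose, cutoff = 0.95) {
  return Boolean(handpose) && handpose.handInViewConfidence > cutoff
}

// Register only when a handsfree instance exists on the page
if (typeof handsfree !== 'undefined') {
  handsfree.use('confidentLogger', data => {
    if (!isConfident(data.handpose)) return
    console.log(data.handpose.landmarks[8]) // index fingertip
  })
}
```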

Three.js Properties

The following helper Three.js properties are also available:

// A THREE Arrow object protruding from the index finger
// - You can use this to calculate pointing vectors
handsfree.model.handpose.three.arrow
// The THREE camera
handsfree.model.handpose.three.camera
// An additional mesh that is positioned at the center of the palm
// - This is where we raycast the Hand Pointer from
handsfree.model.handpose.three.centerPalmObj
// An array of meshes, one per skeleton joint
// - You can tap into each mesh's rotation to calculate pointing vectors for each fingertip
handsfree.model.handpose.three.meshes
// A reusable THREE raycaster
// @see https://threejs.org/docs/#api/en/core/Raycaster
handsfree.model.handpose.three.raycaster
// The THREE scene and renderer used to hold the hand model
handsfree.model.handpose.three.renderer
handsfree.model.handpose.three.scene
// The screen object. The Hand Pointer raycasts from the centerPalmObj
// onto this screen object. The point of intersection is then mapped to
// the device screen to position the pointer
handsfree.model.handpose.three.screen
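As the comments above suggest, a pointing vector can be derived from two joint positions. A framework-agnostic sketch of the math (in practice you would read the positions off the meshes, or use the provided raycaster and arrow directly):

```javascript
// Normalized direction from one [x, y, z] point toward another
function pointingVector(from, to) {
  const d = [to[0] - from[0], to[1] - from[1], to[2] - from[2]]
  const len = Math.hypot(...d) || 1 // avoid dividing by zero
  return d.map(v => v / len)
}

// e.g. from the palm base toward the index fingertip:
// pointingVector(landmarks[0], landmarks[8])
```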

Projects

The following projects all use TensorFlow Handpose, though not all of them were made with Handsfree.js:

