EMG + EMS: Tracking and Stimulating Movement
Electrical Muscle Stimulation (EMS) has become popular in HCI. The idea is that applying a small electrical signal to a muscle causes it to contract, producing movement. In HCI, we do this through sticky, external electrodes. By placing electrodes on the forearm, you can cause fingers to move and the hand to pivot around the wrist.
There are currently a number of challenges in using such systems, including targeting the exact movement you want, isolating specific movements (so as not to end up with additional, simultaneous, unwanted movements), and setting up the system (which currently requires an expert). I am working to overcome these challenges by combining EMS (muscle 'writing') with EMG (muscle 'reading'). This enables us to set up the system automatically and learn which electrodes to target to achieve desired movements.
My work here involves designing and building novel hardware, writing complex control software, applying machine learning techniques to map collected movement data to stimulation patterns, and performing quantitative explorations of user movement patterns.
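The automatic setup described above can be sketched as a closed calibration loop: stimulate each electrode pad at increasing intensity, observe the evoked movement via EMG/motion sensing, and record which pad/intensity pair elicits the target movement. The sketch below is purely illustrative; `stimulate_and_measure` stands in for real hardware I/O and is not the actual system's API.

```python
# Hypothetical sketch of an EMG-guided EMS calibration loop.
# Hardware I/O is replaced by a simulated response model.

NUM_PADS = 8            # electrode pads on the forearm array (illustrative)
TARGET = "index_flex"   # desired movement to elicit

def stimulate_and_measure(pad, intensity):
    """Simulate stimulating one pad and classifying the evoked movement
    from EMG/motion data. Here, pad 3 elicits the target movement once
    intensity crosses the motor threshold."""
    simulated_responses = {3: "index_flex", 5: "wrist_pivot"}
    if intensity < 4:
        return None                      # below motor threshold
    return simulated_responses.get(pad)  # evoked movement label, if any

def calibrate(target, max_intensity=10):
    """Sweep pads and intensities; return the first (pad, intensity)
    pair that evokes the target movement."""
    for pad in range(NUM_PADS):
        for intensity in range(1, max_intensity + 1):
            if stimulate_and_measure(pad, intensity) == target:
                return pad, intensity
    return None

print(calibrate(TARGET))  # → (3, 4) under this simulated response model
```

A real system would additionally need to reject pads that evoke the target alongside unwanted co-movements, which is where the EMG reading earns its keep.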
Veritaps: Detecting Lying on Mobile Devices
We use mobile devices for an increasing share of our communication. Previous work has shown that the sensors on these devices, combined with machine learning models, can detect stress, boredom, and other emotional states. But can we detect lying with these sensors? Language processing can identify lying in longer pieces of text, but mobile communication often involves short responses, emoji, or just tapping a box. So can we detect lying from individual swipes and taps? We built three custom apps and gathered a large amount of user data through Amazon's Mechanical Turk (an online crowdworking platform). We found that truths can be easily identified, while lies are harder to identify. The results do, however, allow us to label a message with a prompt when it cannot be identified as immediately true, encouraging the recipient to double-check or ask further questions.
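The asymmetry above (truths easy, lies hard) suggests a conservative labelling scheme: confidently flag only "likely true" interactions, and prompt the recipient for everything else. The sketch below illustrates that idea only; the features and thresholds are invented and are not the actual Veritaps model.

```python
# Illustrative sketch (not the published Veritaps classifier): label a tap
# from simple interaction features, mirroring the asymmetric labelling
# described above. Feature names and thresholds are hypothetical.

def tap_features(duration_ms, pressure, hesitation_ms):
    """Bundle per-tap features; a real system would derive these from
    touchscreen and motion-sensor data."""
    return (duration_ms, pressure, hesitation_ms)

def label_tap(features, max_duration_ms=250, max_hesitation_ms=400):
    """Only confidently label 'truthful-looking' taps; everything else
    gets an 'unclear' prompt so the recipient can double-check."""
    duration, pressure, hesitation = features
    if duration <= max_duration_ms and hesitation <= max_hesitation_ms:
        return "likely-true"
    return "unclear"

print(label_tap(tap_features(120, 0.4, 90)))   # quick, unhesitating tap
print(label_tap(tap_features(310, 0.8, 650)))  # slow, hesitant tap
```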
PowerShake: Mobile Power Sharing
Current devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other nearby devices, such as cameras (which support different functionality), may have sufficient battery to enable the task. We explored Wireless Power Transfer (WPT) between mobile devices, making power a shareable commodity. The transfer is simple to perform, can easily fit within existing devices, and is compliant with European safety standards.
PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements.
This work involved building custom power-transfer circuitry, building FEM models to determine the effect of any electromagnetic radiation on the user, and conducting workshops and qualitative analysis.
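One way to picture the inter-personal trading described above is as a simple policy: a giver shares only while it can keep a reserve for its own projected use, and power moves in small increments until levels roughly balance. The sketch below is a hypothetical policy, not the actual PowerShake implementation; all function names, units, and thresholds are invented.

```python
# Hypothetical power-trading policy sketch (not PowerShake's actual logic).
# Battery levels are percentages; drain rates are % per hour.

def can_share(giver_pct, giver_drain_pct_per_hr,
              hours_needed=1.0, reserve_pct=20):
    """A giver shares only if it would keep a reserve after its own
    expected usage over the next `hours_needed` hours."""
    projected = giver_pct - giver_drain_pct_per_hr * hours_needed
    return projected > reserve_pct

def transfer_step(giver_pct, receiver_pct, step_pct=5):
    """Trade in small steps until the two levels roughly balance."""
    return step_pct if giver_pct - receiver_pct > step_pct else 0

print(can_share(80, 10))        # giver has plenty of margin
print(transfer_step(80, 30))    # large imbalance: trade one step
print(transfer_step(50, 48))    # nearly balanced: stop trading
```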
Smart Makerspace: In-place Tutorials for the Novice Maker
Getting started in 'making' can be complicated, with a myriad of new tools and techniques to learn. Indeed, learning novel maker skills shares a range of commonalities with learning new software skills, such as uncertainty over tool location, the correct ways to use tools, and the steps required to achieve a goal. Novice makers are supported by a wealth of online resources and tutorials, but these often lack context and can be hard to follow. We draw upon lessons learned from complex software tutorials and integrate the tutorial into the physical maker environment.
We designed a novel smart workbench with in-built activity tracking, instrumented power tools to provide guidance and usage cues, and augmented a toolbox to ease tool selection.
VideoHandles: Exploring Action Camera Footage by Repeating Gestures
VideoHandles is a novel interaction technique for searching through action-camera (e.g. GoPro) video collections. Action cameras are designed to be mounted, switched on, and then ignored as they record the entirety of the wearer's chosen activity. This results in a large amount of footage that may include only a small number of interesting moments. When reviewing the footage later, these moments are hard to locate.
VideoHandles presents a novel solution to this problem, enabling the wearer to search through the footage by replaying actions they performed during the initial capture. Users can perform gestures to explicitly mark moments of interest, or can draw on domain-specific actions, such as diving gestures, to re-locate moments.
We built custom computer vision algorithms and ran quantitative explorations to determine the usability and applicability of the concept.
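The matching at the heart of this idea can be sketched as comparing a query motion signal against sliding windows of the footage's motion track, using dynamic time warping (DTW) to tolerate differences in speed. This is a deliberately simplified 1-D sketch, not the published computer-vision pipeline.

```python
# Simplified sketch of gesture re-matching (not the VideoHandles pipeline):
# find where a replayed gesture best matches the recorded motion track.

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between
    two 1-D signals, tolerant of local speed differences."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

def find_gesture(track, query):
    """Slide a query-length window over the track; the window with the
    lowest DTW distance gives the candidate frame offset."""
    w = len(query)
    return min(range(len(track) - w + 1),
               key=lambda s: dtw(track[s:s + w], query))

track = [0, 0, 1, 3, 1, 0, 0, 0, 1, 3, 1, 0]  # toy per-frame motion signal
query = [1, 3, 1]                              # replayed gesture
print(find_gesture(track, query))  # earliest best-matching frame offset
```

A real system would match multi-dimensional hand trajectories extracted by vision algorithms, and would return ranked candidates rather than a single offset.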
SensaBubble: A Chrono-Sensory Mid-Air Display of Sight and Smell
SensaBubble is a mid-air display system that generates scented bubbles to deliver information to the user via both sight and smell. The system reliably produces single bubbles of specific sizes along a directed path. Each bubble produced by SensaBubble is filled with fog containing a scent relevant to the notification. A visual display is projected onto the bubble, which endures until it bursts, then a scent within the bubble is released, leaving a longer-lasting trace of the event.
Personal 3D Scanning in Archaeology
Archaeology, as a process, cannot be repeated. For this reason, it is especially important to capture detailed, accurate records of the ongoing site-work. Due to time constraints and differing skillsets, data capture is typically conducted by a separate team from the excavators. This separation between excavation and capture can create problems, whereby the capture team fails to record the information that is shaping the excavators' thinking and thus their ongoing practice. Furthermore, current capture processes, especially those using 3D techniques, are time-consuming and disruptive to the excavators.
By working closely with a group of archaeologists, we designed and prototyped personal 3D capture devices. These devices, based around a wireless Kinect, could instantaneously capture 3D models, allowing the individual excavators to quickly incorporate 'capture' into their ongoing work.
Publications
The Dream is Collapsing: the Experience of Exiting VR, to appear in CHI 2018. Award: Honorable Mention (top 5% of ~650 papers).
I Really Did That: Sense of Agency with Touchpad, Keyboard and On-skin Interaction, to appear in CHI 2018
Veritaps: Detecting Truth from Mobile Interaction, to appear in CHI 2018
zPatch: Hybrid Capacitive/Resistive eTextile Sensors, to appear in TEI 2018
Wanding Through Space: Exploring Electric Muscle Stimulation, Augmented Human (AH) 2018
Automatic Calibration of High Density Electric Muscle Stimulation, ACM IMWUT, September 2017 (Presented at Ubicomp 2017)
A garment fabric for reading and writing muscle activity, European Patent App, 17188784.7
Invisiboard: Maximizing Display and Input Space with a Full Screen Text Entry Method for Smartwatches, Mobile HCI 2016
Mobile Energy Sharing Futures, Mobile HCI Workshop 2016
PowerShake: Wireless Power Transfer between Mobile Devices, CHI 2016
Smart Makerspace: An Immersive Instructional Space for Physical Tasks, ITS 2015
Smart Tools and Workspaces for do-it-yourself tasks, US Patent App, 14/968,767
Juggling the Effects of Latency: Software Approaches to Minimize Latency in Projector Camera Systems, UIST Poster 2015
VideoHandles: Searching through Action Camera Videos by Replicating Hand Gestures, Journal of Computers and Graphics 2015
Resonant Bits: Controlling Digital Musical Instruments with Resonance and the Ideomotor Effect, NIME 2015
TellTale: Adding a Polygraph to Everyday Life, CHI EA 2015
The Cage: Towards a 6DoF Remote Control with Force Feedback for UAV Interaction, CHI EA 2015
The Camera "at the Trowel's Edge": Personal Video Recording in Archaeological Research, Journal of Archaeological Method and Theory, 23, 2015
Juggling the Effects of Latency: Motion Prediction Approaches to Reducing Latency in Dynamic Projector-Camera Systems, Microsoft Technical Report
VideoHandles: Replicating Gestures to Search Through Action-Camera Video, SUI 2014. Award: Honorable Mention
Wubbles: A Collaborative Ephemeral Musical Instrument, NIME Demonstrations 2014
Latency Reduction in Camera-Projection Systems, US Patent 14/202,719
SensaBubble: A Chrono-Sensory Mid-Air Display of Sight and Smell, CHI 2014. Award: Best Paper
Extending Interaction for Smart Watches: Enabling Bimanual Around Device Control, CHI EA 2014
ReflectoSlates: Personal Overlays for Tabletops Combining Camera-Projector Systems and RetroReflective Materials, CHI EA 2014
Quick and Dirty: Streamlined 3D Scanning in Archaeology, CSCW 2014.
Research Chronotopes: Investigating the time-space of archaeological excavation, CHI ’13 Workshop
HCI Community Activities
CHI 2018 AC, User Experience and Usability sub-committee.
CHI 2017 AC, Engineering Interactive Systems and Technologies sub-committee.
UIST 2016 Video Previews Co-Chair.
CHI 2015 Student Volunteer.
CHI 2014 Student Volunteer.
CHI, TEI, UIST Reviewer, since 2013.
Sussex University, June 2016: Electrical Muscle Stimulation, Ghosts and Illusions.
Research Internship with Tovi Grossman, Autodesk Research, Toronto
Research Internship with Hrvoje Benko, MSR, Redmond
Kenton O’Hara and Shahram Izadi, Microsoft Research, UK - Quick and Dirty... paper, CSCW 2014
EPSRC Sandbox Project, 'Patina', Universities of Swansea, Brighton, Southampton, Newcastle, and MSR UK.
HCI for Magicians, Watershed Bristol (local creative hub)
PhD, Thesis Title: Designing for Embodied Reflection, BIG Lab, Bristol University
Master of Engineering (summa cum laude), Computer Science, University of Bristol