I am a Lecturer in the Faculty of Information Technology at Monash University, Australia. I work within CHIC - the Computer Human Interaction and Creativity section.

Prior to this, I was a postdoc in Human Centered Computing at the University of Copenhagen, working with Kasper Hornbaek on the ERC (European Research Council) BodyUI project. We researched body-based user interfaces, exploring Electrical Muscle Stimulation (EMS), the sense of agency, and mobile interaction.

I received my PhD from the Interaction Group at the University of Bristol. My PhD thesis, Designing for Embodied Reflection, was supervised by Mike Fraser. During my time in Bristol, I also worked closely with Sri Subramanian.

My research focuses on bringing emergent technologies into the real world by (1) exploring user experience, (2) developing the technology itself, and (3) exploring application domains. My work combines hardware and software development with quantitative and qualitative studies and analysis. My strength lies in using mixed methodologies to build understanding of complex phenomena and technologies.

Email: Jarrod[dot]Knibbe[at]monash[dot]edu | jarrodknibbe[at]gmail[dot]com | jarrod[at]di[dot]ku[dot]dk


Projects

EMG + EMS: Tracking and Stimulating Movement

By applying a small electrical signal to a muscle, you can cause that muscle to contract and so produce movement. We call this Electrical Muscle Stimulation (EMS). Through EMS, we can give the computer control of the body.

A number of challenges remain in using systems like this, such as producing exactly the movement you want. I am working to overcome these challenges by combining EMS (muscle 'writing', so to speak) with EMG (muscle 'reading'). This enables us to automatically set up the system and learn which electrodes to target to achieve a desired movement.
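
A minimal sketch of that calibration loop, assuming a hypothetical eight-pad electrode array and placeholder hardware calls (not our actual system): sweep pads and intensities, score the evoked movement, and keep the lowest intensity that works.

```python
# Illustrative sketch only: a naive EMS electrode-calibration loop.
# `stimulate` and `read_movement` are hypothetical stand-ins for a
# real EMS device and EMG/IMU sensing pipeline.
import random

ELECTRODES = range(8)          # assumed 8-pad electrode array
TARGET = "wrist_extension"     # desired movement

def stimulate(electrode: int, intensity_ma: float) -> None:
    """Placeholder: deliver a stimulation pulse through `electrode`."""

def read_movement() -> dict:
    """Placeholder: score observed movements from EMG/IMU data.
    Faked with random numbers so the sketch runs end to end."""
    return {TARGET: random.random()}

def calibrate(threshold: float = 0.8) -> dict:
    """Sweep electrodes and intensities; keep pairs that evoke the target."""
    mapping = {}
    for pad in ELECTRODES:
        for intensity_ma in (5.0, 10.0, 15.0):   # ramp up conservatively
            stimulate(pad, intensity_ma)
            if read_movement().get(TARGET, 0.0) >= threshold:
                mapping[pad] = intensity_ma      # lowest intensity that works
                break
    return mapping

print(calibrate())
```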

Veritaps: Detecting Lying on Mobile Devices

We use mobile devices for an increasing amount of our communication. Previous work has shown that the sensors on these devices, combined with machine learning models, can detect stress, boredom, and other emotional states. But can they detect lying? Specifically, can we detect lying for individual swipes and taps? We built three custom apps and gathered user data at scale through Amazon's Mechanical Turk (an online crowdworking platform). We found that truths can be identified with confidence, while lies are harder to detect.
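
As a rough illustration of the approach (the feature set, data, and model below are assumptions, not those from our studies), one can train an off-the-shelf classifier on per-tap features:

```python
# Illustrative sketch only: classifying individual taps as truthful or
# deceptive from touch features. The feature set and the synthetic data
# below are assumptions; the study's features and models may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-tap features: [duration_ms, pressure, x_jitter, y_jitter]
n = 500
truth_taps = rng.normal([120, 0.50, 1.0, 1.0], [20, 0.10, 0.3, 0.3], (n, 4))
lie_taps = rng.normal([160, 0.60, 1.5, 1.5], [30, 0.10, 0.5, 0.5], (n, 4))
X = np.vstack([truth_taps, lie_taps])
y = np.array([0] * n + [1] * n)   # 0 = truth, 1 = lie

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```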

PowerShake: Mobile Power Sharing

Current mobile devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other nearby devices, such as cameras (which support different functionality), may have battery to spare for that task. We explored Wireless Power Transfer (WPT) between mobile devices, making power a shareable commodity. The transfer is simple to perform, can easily fit within existing devices, and is compliant with European Safety Standards.

PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements.

This work involved building custom power-transfer circuitry, building FEM (finite element method) models to assess the effect of electromagnetic radiation on the user, and conducting workshops with qualitative analysis.
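
As a back-of-envelope illustration of what sharing buys you (all numbers below are assumptions, not measured figures from the project):

```python
# Back-of-envelope sketch: how long a PowerShake-style session must last
# to fund a given task. All numbers are illustrative assumptions, not
# measured figures from the project.
TRANSFER_W = 3.0     # assumed device-to-device transfer power (watts)
EFFICIENCY = 0.5     # assumed end-to-end transfer efficiency
CALL_W = 1.0         # assumed power draw of a phone call (watts)

def sharing_minutes_for_call(call_minutes: float) -> float:
    """Minutes of power sharing needed to fund `call_minutes` of calling."""
    energy_needed_wh = CALL_W * call_minutes / 60.0
    delivered_wh_per_min = TRANSFER_W * EFFICIENCY / 60.0
    return energy_needed_wh / delivered_wh_per_min

print(f"{sharing_minutes_for_call(10):.1f} min of sharing ≈ a 10 min call")
```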

Smart Makerspace: In-place Tutorials for the Novice Maker

Getting started in 'making' can be complicated, with a myriad of new tools and techniques to learn. Indeed, learning maker skills shares a range of commonalities with learning new software skills, such as uncertainty over tool location, the correct ways to use tools, and the steps required to achieve a goal. Novice makers are supported by a wealth of online resources and tutorials, but these often lack context and can be hard to follow. We draw on the lessons learned from complex software tutorials and integrate the tutorial into the physical maker environment.

We designed a novel smart workbench with in-built activity tracking, instrumented power tools that provide guidance and usage cues, and an augmented toolbox that eases tool selection.
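
A minimal sketch of the driving idea, assuming hypothetical step names and tool events rather than our actual implementation: the tutorial advances as the instrumented bench reports tool usage.

```python
# Illustrative sketch only: an in-place tutorial that advances on events
# from instrumented tools. Step names and event strings are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str
    completes_on: str   # the tool event that marks this step as done

TUTORIAL = [
    Step("Clamp the workpiece to the bench.", "clamp_closed"),
    Step("Drill a pilot hole with the 3mm bit.", "drill_used"),
    Step("Drive the screw home with the powered driver.", "driver_used"),
]

def run(events):
    """Consume a stream of tool events, presenting each step in turn."""
    idx = 0
    print("STEP:", TUTORIAL[idx].instruction)
    for event in events:
        if event == TUTORIAL[idx].completes_on:
            idx += 1
            if idx == len(TUTORIAL):
                print("Tutorial complete.")
                return
            print("STEP:", TUTORIAL[idx].instruction)

run(["clamp_closed", "drill_used", "driver_used"])
```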

VideoHandles: Exploring Action Camera Footage by Repeating Gestures

VideoHandles is a novel interaction technique for searching through action-camera (e.g. GoPro) video collections. Action cameras are designed to be mounted, switched on, and then ignored as they record the entirety of the wearer's chosen activity. This results in large amounts of footage that may contain only a small number of interesting moments, which are hard to locate when reviewing the footage later.

VideoHandles offers a novel solution to this problem, enabling the wearer to search through the footage by replaying actions they performed during the initial capture. Users can perform gestures specifically to mark moments of interest, or can draw on actions from the activity itself, such as diving gestures, to re-locate moments.

We built custom computer vision algorithms and ran quantitative studies to assess the usability and applicability of the concept.
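
The core matching idea can be sketched as a sliding-window search over per-frame motion features. The toy example below, with synthetic features, illustrates only this concept, not our actual vision pipeline:

```python
# Illustrative sketch only: re-finding a gesture by sliding the query's
# feature sequence over the video's per-frame motion features. Synthetic
# features stand in for the output of a real vision pipeline.
import numpy as np

def best_match(video_feats: np.ndarray, query_feats: np.ndarray) -> int:
    """Return the start frame minimising mean squared feature distance."""
    n, q = len(video_feats), len(query_feats)
    dists = [
        np.mean((video_feats[s:s + q] - query_feats) ** 2)
        for s in range(n - q + 1)
    ]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
video = rng.normal(size=(1000, 8))    # 1000 frames x 8 motion features
query = video[400:430] + rng.normal(scale=0.05, size=(30, 8))
print(best_match(video, query))       # should land at (or near) frame 400
```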

SensaBubble

SensaBubble is a mid-air display system that generates scented bubbles to deliver information to the user via both sight and smell. The system reliably produces single bubbles of specific sizes along a directed path. Each bubble is filled with fog containing a scent relevant to the notification. A visual display is projected onto the bubble and endures until the bubble bursts; the scent within is then released, leaving a longer-lasting trace of the event.

Personal 3D Scanning in Archaeology

Archaeology, as a process, cannot be repeated. For this reason, it is especially important to capture detailed, accurate records of the ongoing site work. Due to time constraints and differing skillsets, the data capture is typically conducted by a different team from the excavators. This separation between excavation and capture can create problems, whereby the capture team misses information that is shaping the excavators' thinking and thus their ongoing practice. Furthermore, current capture processes, especially those using 3D techniques, are time-consuming and disruptive to the excavators.

By working closely with a group of archaeologists, we designed and prototyped personal 3D capture devices. These devices, based around a wireless Kinect, could instantaneously capture 3D models, allowing the individual excavators to quickly incorporate 'capture' into their ongoing work.
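
At the heart of such capture is back-projecting each depth frame into a 3D point cloud via the pinhole camera model. A minimal sketch, assuming approximate Kinect v1 intrinsics (illustrative values, not a calibrated device):

```python
# Illustrative sketch only: back-projecting a depth image to a 3D point
# cloud with the pinhole model. The intrinsics are approximate Kinect v1
# values used as assumptions, not from a calibrated device.
import numpy as np

FX, FY, CX, CY = 594.2, 591.0, 320.0, 240.0   # assumed intrinsics (pixels)

def depth_to_points(depth_m: np.ndarray) -> np.ndarray:
    """Map an HxW depth image (metres) to an (H*W) x 3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.dstack([x, y, z]).reshape(-1, 3)

depth = np.full((480, 640), 1.5)      # synthetic: a flat wall 1.5 m away
print(depth_to_points(depth).shape)   # (307200, 3)
```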


CV

HCI Community Activities

CHI 2018 AC, User Experience and Usability sub-committee.

CHI 2017 AC, Engineering Interactive Systems and Technologies sub-committee.

UIST 2016 Video Previews Co-Chair.

CHI 2016 Video Previews Co-Chair.

CHI 2015 Student Volunteer.

CHI 2014 Student Volunteer.

Reviewer for CHI, TEI, and UIST since 2013.

Internships

Research Internship with Tovi Grossman, Autodesk Research, Toronto

Research Internship with Hrvoje Benko, MSR, Redmond

Education

PhD, Thesis: Designing for Embodied Reflection, BIG Lab, University of Bristol

Master of Engineering (summa cum laude), Computer Science, University of Bristol