FEEDBACK LOOPS AT SCIENCE IN THE CITY FESTIVAL
Produced by Alina Ivan, a researcher at King's College London's Institute of Psychiatry, the contemporary dance piece Feedback Loops promised a collaborative work inspired by live body data taken from wearable devices, expressing the entanglements of neurological conditions through movement. Performed as part of Malta’s Science in the City Festival (and streamed worldwide), in connection with RADAR-CNS, this small project had fingers in many pies. Here, our neuroscience editor, Abhrajeet Roy, speaks to Ivan about the inspiration, research, and technicalities behind the project, and I share some thoughts on the dance performance.
Abhrajeet Roy: Could you expand a bit on the broader goals of the RADAR-CNS study and the early phases of the Feedback Loops project? Was this advertised to patients as a research study, an art project, or something entirely different?
Alina Ivan: The RADAR-CNS study (https://www.radar-cns.org/) aims to identify digital biomarkers of depression, epilepsy and multiple sclerosis in order to predict symptoms of these conditions. It does so based on a combination of continuous passive data streams (e.g. sleep time, GPS data, the number of calls and texts) and more traditional measures (e.g. questionnaires, cognitive tests) administered over time. For example, in the case of depression, more missed calls and more time spent at home could indicate social withdrawal, a possible sign of worsening depression symptoms.
I've worked on a 3-minute animation that explains the study and how it may fit into a larger historical picture (https://www.youtube.com/watch?v=ImoT4RVD-sU&t=0s&ab_channel=kingscollegelondon), and there is more information about the study protocol here: https://www.youtube.com/watch?v=pgI02uOLUAY&t=9s&ab_channel=kingscollegelondon
RADAR-CNS was advertised to patients as a research study, whereas Feedback Loops was presented to the Patient Advisory Board (people with lived experience of these illnesses) and the rest of the team as an interdisciplinary public engagement project. The dancer, the musician, and I held interviews with people with lived experience of these illnesses. The patients kindly shared their stories, which informed the development of the Feedback Loops storyline, choreography and music. I believe it's important to respect and learn from others' lived experiences, which is why their stories are at the very heart of the performance. Feedback Loops also contains quotes collected as part of the RADAR participant interviews.
AR: Could you clarify which biosignals were being recorded? Some of these signals are inherently more oscillatory than others (i.e. pulse versus sweating) - how does that play into the music generation? Are certain signals more useful for bass/rhythm/etc.?
AI: Acceleration is used for choosing notes, as it's the most visual of the data sources. Sweat is used for changing the sound: because it's a slowly changing data source, it keeps the music from becoming too erratic. The pulse controls the probability of musically timed notes, as it's a rhythmically moving data source.
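To make this mapping concrete, here is a minimal, hypothetical sketch in Python of how three such data streams could steer musical parameters. The note pool, signal ranges and function names are illustrative assumptions for this article, not the script actually used in Feedback Loops.

import random

# Hypothetical pool of pitches to choose from (MIDI note numbers).
NOTE_POOL = [60, 62, 65, 67, 69, 72]

def pick_note(acceleration_magnitude, max_accel=3.0):
    # Acceleration chooses notes: larger movements reach higher into the pool.
    idx = int(min(acceleration_magnitude / max_accel, 1.0) * (len(NOTE_POOL) - 1))
    return NOTE_POOL[idx]

def timbre_from_eda(eda_microsiemens, eda_range=(0.1, 20.0)):
    # Sweat (electrodermal activity) changes slowly, so it gently shifts the
    # sound, here expressed as a 0..1 value that could drive a filter.
    low, high = eda_range
    return max(0.0, min(1.0, (eda_microsiemens - low) / (high - low)))

def should_play(bpm, beat_position):
    # Pulse sets the probability that a musically timed note actually sounds.
    probability = min(bpm / 180.0, 1.0)  # faster pulse, denser notes
    on_beat = abs(beat_position - round(beat_position)) < 0.05
    return on_beat and random.random() < probability

# One 'tick' of the loop with made-up sensor readings:
if should_play(bpm=92, beat_position=4.02):
    note = pick_note(acceleration_magnitude=1.4)
    cutoff = timbre_from_eda(eda_microsiemens=6.5)
    print(f"play MIDI note {note} with filter amount {cutoff:.2f}")

In a scheme like this, the slow-moving sweat signal nudges the timbre gradually, while pulse and movement decide when and which notes sound, mirroring the division of labour Ivan describes.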
AR: And what happens to the data during each performance? Is it recorded for any sort of offline analysis?
AI: We have the data recorded and available for future research. We are currently applying for ethics approval at King's College London to analyse the data from the Impact Evaluation Form: https://kclbs.eu.qualtrics.com/jfe/form/SV_4SkRK2pY3jJ6cnP?fbclid=IwAR1s7o-N74LBMWPjYH2pgIXdGsgWQXTDQ0WwA4NglGsWFPERwCsNvg6tyLg. It would be interesting to see to what extent the physiological data of the dancer relates to the audience's data, which would involve running an experiment. The Empatica E4 can be used as an implicit measure of audience arousal during a performance (Vicary et al., 2017), so it would be interesting to distribute a few E4s among audience members to learn more about how they perceive the performance! Also, synchronicity between the dancer's data and the audience's data could indicate rapport and empathy (Lumsden, Miles & Macrae, 2014). However, we'll have to see if we have the resources to explore these questions further - fingers crossed!
AR: Your team mentioned a goal of having multiple dancers/players, each with their own wristband. What about multiple wristbands per individual? Is there a practical limit to how much data can reasonably be processed dynamically during a dance performance?
AI: I like the idea of having multiple devices per individual!! The accelerometer (movement) data only reflects movements made with the arm where the device is worn. On one hand, this makes it easier to spot how the accelerometer data is related to the sound. On the other hand, having three additional devices - one around the arm and one around each ankle - could allow the movements of different body parts to come into focus through the music at different times. This could bring the music even closer to the dance!
AR: You have described how, for this project, an Empatica E4 wristband was used, but for the broader RADAR-CNS study there are a range of commercial sensors available to use. What were the key features of the E4 which led your team to use it for Feedback Loops?
AI: We have chosen the Empatica E4 (used in the RADAR Epilepsy Study) over other devices used in the RADAR Project for several reasons.
One is that it allows for continuous real-time streaming of the raw data. The Fitbit Charge 3, which is used in the RADAR Depression Study, does not allow this; the Fitbit data is only available after a manual synchronization with the Fitbit app.
Also, the data sampling rate is not as good with the Fitbit as it is with the E4 (the E4's PPG sensor has a sampling rate of 64 Hz, i.e. 64 times per second, while the Fitbit's sampling rate varies from 2 to 30 Hz).
The Empatica also collects more types of data than the Fitbit: it has sensors for electrodermal activity and temperature in addition to the accelerometer and the photoplethysmography sensor. When transposed into music, these additional data can make the performance more intimate!
AR: Music generated purely from biosignals is a fascinating project. Is any machine learning/AI involved in the real-time data processing? Are these music-generating algorithms adaptive at all, or is it more of a parameter space that is modulated during each performance based on the biodata?
AI: I'm glad you liked the idea! There is no machine learning or AI involved; there is a direct connection between the dancer’s body and the notes and sounds generated. The algorithm is not adaptive: it chooses what happens within a set of variables, behaving differently depending on the biometric data that comes through. The music is therefore based on a mix of artistic interpretation of the patients' stories and the biofeedback (see the accompanying picture).
AR: And, do you have plans to further integrate this type of work with virtual or augmented reality?
AI: We hope that the binaural (3D) sound makes the performance more immersive and also reveals subtle aspects of the conditions (e.g., the patient with epilepsy experienced flashing lights around the parieto-occipital area during a seizure, which is where the sound in the musical piece is directed from, as well as the light).
VR would be an exciting new avenue to explore, for sure! But first, we're looking forward to doing the performance in real life! For now, we are planning to use snippets of the performance (e.g. the depression section) to create individual videos that we can share online. The aim of these is to raise awareness of the invisible symptoms of these three conditions and promote understanding about how the conditions may look and feel.
The data from the performance was recorded for further research and, in future, the team hope to discover the extent to which the dancer’s physiological data relates to the audience’s data. This would involve running an experiment to measure audience arousal with the Empatica E4 (Vicary et al., 2017) and, further, to explore rapport and empathy between the two parties (Lumsden, Miles & Macrae, 2014).
PERFORMANCE REVIEW
One of the key elements of the live performance was the use of the dancer’s body in the moment-to-moment generation of its soundtrack, which could easily have resulted in a thoroughly unmelodic and inharmonious accompaniment. However, the constantly evolving score was richly textured and enhanced the personal element of the dance and its parts, a testament to the dancer and choreographer, Anna Spink. Divided into sections, the dance represented several conditions including depression, multiple sclerosis, and epilepsy. But how did her body ‘make’ the music, and to what extent did this ‘call and response’ communication play out in real time?
This data included her blood volume pulse, which controlled the probability and rhythm of the notes played and their volume, while her electrodermal activity (sweat production) influenced the dynamics of the notes expressed. The speed and acceleration of her movement, as the most visual data source, chose which notes were played. The Empatica E4 wristband (also used by RADAR-CNS) was Alina’s monitor of choice, allowing real-time streaming of raw electrodermal activity and temperature data in addition to its accelerometer (movement data taken from the arm it’s attached to) and photoplethysmography readings. Its 64 Hz sampling rate (64 samples per second) is considerably preferable to that of the more commonly known Fitbit wearable, which has a more limited sampling rate of 2 to 30 Hz and which only makes its data accessible after a manual synchronization with its app.
The musician of the piece, Dan Wimperis, stated, ‘the sonification is [in] two parts, a Python script that converts the live data into musical notes and controls, and a music project that takes in the musical notes and controls and generates the sounds based on the data.’ The result was something far more accomplished and more actualised in melody than one might expect. In fact, it resembled many contemporary dance accompaniment tracks and allowed the viewer to forget that the music was created live on stage with every move she made.
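For readers curious what the first half of such a pipeline might look like, here is a minimal, hypothetical sketch of a Python script forwarding derived note and control messages to a separate music application over OSC. The python-osc library, the port number and the message addresses are assumptions made for illustration, not details of Wimperis’s actual setup.

from pythonosc.udp_client import SimpleUDPClient  # assumed transport between the two parts

# Hypothetical: the music project listens for OSC messages on a local port.
client = SimpleUDPClient("127.0.0.1", 9000)

def send_note(note, velocity):
    # Forward a musical note derived from the live biosignal data.
    client.send_message("/feedbackloops/note", [int(note), float(velocity)])

def send_control(name, value):
    # Forward a continuous control, e.g. a filter amount driven by sweat data.
    client.send_message(f"/feedbackloops/control/{name}", float(value))

# Example use, once the biosignal mapping has produced values:
send_note(65, 0.8)
send_control("filter", 0.32)

Splitting the system this way keeps the data handling in a lightweight script while the sound design lives in a dedicated music environment.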
For this reason, it could have been interesting to see fewer choreographic motifs and more experimentation. Though the piece was choreographed in advance, it would have been interesting to see just how far her improvisation could have taken the musical accompaniment, and vice versa; a live experiment on stage from both artists would have made for a truly thrilling interchange. One element that perhaps detracted from the dramatic immersion of the piece was the projection of quotes from the study's research patients alongside the performance. Although these made the audience aware of the source of the movement, they made for a slightly less integrated message and brought the viewer out of the moment, distracting the eye from the continuous movement and storytelling.
Nevertheless, it was a fascinating and innovative performance. Influences seem to have been taken from street-contemporary styles such as tutting, floor work reminiscent of Martha Graham’s explorations of the spine, and an emphasized articulation of the hands and feet and their connection to the floor. The confined stage and spotlight also shone a light on the alienating experience of the conditions being explored, and the use of breath to separate the sections made for gentle and fitting transitions. The piece made truly meaningful and illuminating strides towards communicating the complex emotions of living with a debilitating condition in a visceral and affecting way. We look forward to hearing more about this brilliant project and the research team at King’s College London, and to seeing more of Anna Spink’s work in the future.
For more about Feedback Loops, please visit: https://www.kcl.ac.uk/news/interdisciplinary-dance-performance-feedback-loops-highlight-of-science-in-the-city-malta-festival
All images shown courtesy of King's College London and © 2021 Chris Scott Studio