1 00:00:00,000 --> 00:00:13,418 *rC3 postroll music* 2 00:00:13,418 --> 00:00:19,450 Herald: Now, imagine a stage with an artist performing in front of a crowd. 3 00:00:19,450 --> 00:00:25,260 Is there a way to measure and even quantify the show's impact on the spectators? 4 00:00:25,260 --> 00:00:30,200 Kai Kunze is going to address this question in his talk Boiling Mind now. 5 00:00:30,200 --> 00:00:33,200 Kai, up to you. 6 00:00:33,200 --> 00:00:38,720 Kai: Thanks a lot for the introduction, but we have a short video. I hope 7 00:00:38,720 --> 00:00:47,535 that can be played right now. 8 00:00:47,535 --> 00:01:09,089 *intense electronic staccato music* 9 00:01:09,089 --> 00:01:28,105 *music shifts to include softer piano tones* 10 00:01:28,105 --> 00:02:04,861 *music shifts again to include harp-like tones* 11 00:02:04,861 --> 00:02:10,725 *music keeps gently shifting* 12 00:02:10,725 --> 00:02:24,894 *longer drawn out, slowly decreasing pitch* 13 00:02:24,894 --> 00:02:40,095 *shift towards slow, guitar-like sounds* 14 00:02:40,095 --> 00:03:03,653 *with light crackling noises* 15 00:03:03,694 --> 00:03:25,015 *music getting quieter, softer* 16 00:03:25,015 --> 00:03:36,225 *and fades away* 17 00:03:58,155 --> 00:04:15,296 *inaudible talking* 18 00:04:15,296 --> 00:04:22,080 Kai: So thanks a lot for the intro. This is the Boiling Mind talk, on linking 19 00:04:22,080 --> 00:04:30,480 physiology and choreography. I just started off with this short video, which should 20 00:04:30,480 --> 00:04:36,990 give you an overview of the experience of this dance performance that we 21 00:04:36,990 --> 00:04:44,520 staged in Tokyo at the beginning of the year, just before the lockdown, actually. 22 00:04:44,520 --> 00:04:52,640 And the idea behind this was: we wanted to put the audience on stage, so breaking the 23 00:04:52,640 --> 00:04:59,920 fourth wall, trying to use physiological sensing in the audience. That change 24 00:04:59,920 --> 00:05:08,640 is then reflected on stage over the projection, the lights and also the audio to 25 00:05:08,640 --> 00:05:14,960 influence the dancers and performers, and is then, of course, fed back again to 26 00:05:14,960 --> 00:05:22,880 the audience, creating an augmented feedback loop. In this talk today, I just 27 00:05:22,880 --> 00:05:28,000 want to give you a small overview: a little bit about the motivation, why I 28 00:05:28,000 --> 00:05:36,160 thought it's a nice topic for the remote experience of the Chaos Computer Club, 29 00:05:36,160 --> 00:05:41,120 and also a little bit more about the concept, the setup and the design 30 00:05:41,120 --> 00:05:48,720 iterations, as well as the lessons learned. For me, giving this talk 31 00:05:48,720 --> 00:05:55,920 is a good way to exchange expertise and to find a couple of people who 32 00:05:55,920 --> 00:06:01,600 might be interested in the next iterations, because I think we are still 33 00:06:01,600 --> 00:06:06,960 not done with this work; it's still kind of work in progress. And also a way 34 00:06:06,960 --> 00:06:12,160 to share data, so others can do some explorative data analysis on the recorded performances 35 00:06:12,160 --> 00:06:19,360 that we have.
And then most important: I wanted to explore a more creative way to 36 00:06:19,360 --> 00:06:25,760 use physiological data, because for me as a researcher 37 00:06:25,760 --> 00:06:31,920 working on wearable computing or activity recognition, often we just look into 38 00:06:31,920 --> 00:06:39,326 recognizing or predicting certain motions or certain mental states. 39 00:06:39,326 --> 00:06:47,519 And that, at least for simple things, feeds back into these very - I think - 40 00:06:47,519 --> 00:06:55,190 idiotic or stupid ideas of surveillance and similar application cases. 41 00:06:55,190 --> 00:07:01,330 So can we create more intuitive ways to use physiological data? 42 00:07:01,330 --> 00:07:04,400 So from a concept perspective, I think the 43 00:07:04,400 --> 00:07:10,520 video gave a good overview of what we tried to create. However, 44 00:07:10,520 --> 00:07:17,600 what we did in three performances was: we used physiological sensors on all audience 45 00:07:17,600 --> 00:07:22,960 members. For us, it was important that we were not singling out individual people 46 00:07:22,960 --> 00:07:29,760 to just get feedback from them, but had the whole response, the whole physiological 47 00:07:29,760 --> 00:07:37,440 state of the audience, as an input to the performance. In that case, we actually 48 00:07:37,440 --> 00:07:45,520 used heart rate variability and also galvanic skin response as inputs. 49 00:07:45,520 --> 00:07:51,760 And these inputs then changed the projection that you could see, the lights, especially 50 00:07:51,760 --> 00:07:58,160 the intensity of the lights, and also the sound. And that, again, then led to 51 00:07:58,160 --> 00:08:05,222 changes in the dancing behavior of the performers. 52 00:08:05,222 --> 00:08:10,805 For the sensing, we went with a wearable setup, 53 00:08:10,805 --> 00:08:18,983 in this case a fully wireless wristband, because we wanted something that is 54 00:08:18,983 --> 00:08:25,162 easy to wear and easy to put on and take off. We had a couple of iterations on that, 55 00:08:25,172 --> 00:08:32,971 and we then decided to sense electrodermal activity and also heart activity, 56 00:08:32,971 --> 00:08:39,520 because there's some related work that links these signals to 57 00:08:39,520 --> 00:08:45,920 engagement, stress and also excitement measures. And the question then was also 58 00:08:45,920 --> 00:08:53,200 where to sense it. First, we went with a couple of wristbands and also kind of 59 00:08:53,200 --> 00:08:58,240 commercial or half-commercial approaches. However, the sensing quality 60 00:08:58,240 --> 00:09:03,753 was just not good enough, especially from the wrist. You cannot really get good 61 00:09:03,753 --> 00:09:09,040 electrodermal activity, so galvanic skin response, there. It's more or less a sweat 62 00:09:09,040 --> 00:09:18,640 sensor. So that means you can detect if somebody is sweating, and some of the 63 00:09:18,640 --> 00:09:25,680 sweat is actually related to a stress response. And in that case, there are a 64 00:09:25,680 --> 00:09:30,080 couple of places to measure it. It could be on the lower part of your hand or 65 00:09:30,080 --> 00:09:35,280 also on the fingers. These are usually the best positions. So we used the fingers. 66 00:09:35,280 --> 00:09:42,560 From the fingers we can also get heart rate activity. And in addition to that, there's 67 00:09:42,560 --> 00:09:48,480 also a small motion sensor, so a gyro and an accelerometer, in the wristband.
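*Editor's note: a minimal sketch, not from the talk, of how beats per minute can be derived from a fingertip pulse (PPG) signal like the one this wristband records; the sampling rate, the signal name `ppg` and the peak spacing are assumptions.*

```python
# Sketch: estimate beats per minute from a fingertip pulse (PPG) window.
# Real devices additionally need band-pass filtering and motion-artifact
# handling; this only shows the basic peak-counting idea.
import numpy as np
from scipy.signal import find_peaks

FS = 50  # assumed sampling rate in Hz

def bpm_from_ppg(ppg: np.ndarray, fs: int = FS) -> float:
    """Estimate heart rate by finding systolic peaks in a PPG window."""
    # Require peaks to be at least 0.4 s apart (caps detection at 150 bpm).
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return float("nan")
    ibi = np.diff(peaks) / fs        # inter-beat intervals in seconds
    return 60.0 / ibi.mean()         # beats per minute

# Example with a synthetic 70 bpm pulse train:
t = np.arange(0, 30, 1 / FS)
print(round(bpm_from_ppg(np.sin(2 * np.pi * (70 / 60) * t))))  # ~70
```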
We haven't 68 00:09:48,480 --> 00:09:54,160 used that for the performance right now, but we still have the recordings from the 69 00:09:54,160 --> 00:10:00,400 audience for that as well. When I say we, I mean especially George and also Dingding, 70 00:10:00,400 --> 00:10:05,160 two researchers who work with me, who actually took care of the designs. 71 00:10:05,160 --> 00:10:12,087 So then the question was also how to map it to the environment or the staging. 72 00:10:12,087 --> 00:10:17,180 In this case, actually, this was done by a different team, 73 00:10:17,180 --> 00:10:20,880 this was done by the Embodied Media team, also at KMD. 74 00:10:20,880 --> 00:10:24,740 So I know a little bit about it, but I'm definitely not an expert. 75 00:10:24,740 --> 00:10:33,360 And for the initial design we thought we'd use the EDA for the movement 76 00:10:33,360 --> 00:10:40,960 speed of the projection. So the EDA rate of change is matched to the movement of these 77 00:10:40,960 --> 00:10:47,120 blobs that you could see, or also the meshes that you can see, and the color represents 78 00:10:47,120 --> 00:10:52,640 the heart rate. We went for the LF/HF feature, that's the low frequency to high 79 00:10:52,640 --> 00:10:57,600 frequency ratio, which should give you, according to related work, some indication 80 00:10:57,600 --> 00:11:04,000 of excitement. For the lights: the lights were also bound to the heart rate, 81 00:11:04,000 --> 00:11:08,720 in this case, the beats per minute, and they were matched to intensity. So if the 82 00:11:08,720 --> 00:11:13,520 beats per minute of the audience go collectively up, the light gets brighter, 83 00:11:13,520 --> 00:11:19,440 otherwise, it's dimmer. For the audio: we had an audio designer who took care of the 84 00:11:19,440 --> 00:11:27,760 sounds and faded specific sounds in and out, also related to the EDA, to the 85 00:11:27,760 --> 00:11:36,320 relative rate of change of the electrodermal activity. All this happened while 86 00:11:36,320 --> 00:11:43,760 the sensors were connected over a sensing server written in Qt to the TouchDesigner software 87 00:11:43,760 --> 00:11:53,280 that generated these types of projections. The music also got fed into it, and that 88 00:11:53,280 --> 00:11:59,200 was then controlling the feedback to the dancers. If you want to 89 00:11:59,200 --> 00:12:09,280 have a bit more detail, I uploaded the work-in-progress preprint paper, a draft 90 00:12:09,280 --> 00:12:15,840 of an accepted TEI paper. So in case you are interested in the mappings and the design 91 00:12:15,840 --> 00:12:20,320 decisions for the projections, there is a little bit more information there. 92 00:12:20,320 --> 00:12:26,560 I'm also happy to answer those questions later on. However, I will probably just 93 00:12:26,560 --> 00:12:31,520 forward them to the designers who worked on them. And then, for the overall 94 00:12:31,520 --> 00:12:38,640 performance, what happened was: we started out with an explanation of the experience. 95 00:12:38,640 --> 00:12:45,576 It was already advertised as a performance that would take in electrodermal 96 00:12:45,576 --> 00:12:52,080 activity and heartbeat activity. So, people who bought tickets or came to 97 00:12:52,080 --> 00:12:56,000 the event already had a little bit of background information. We, of course, 98 00:12:56,000 --> 00:13:00,720 also made sure that we explained at the beginning what type of sensing we would be 99 00:13:00,720 --> 00:13:09,360 using.
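*Editor's note: the production pipeline described above ran over a Qt sensing server into TouchDesigner; as a hedged illustration of the mapping logic only, here is a short Python sketch that aggregates audience features and forwards them via OSC, a protocol TouchDesigner can receive. The addresses, value ranges and update scheme are assumptions, not the actual show code.*

```python
# Sketch of the audience-to-stage mapping: collective BPM -> light intensity,
# LF/HF ratio -> projection colour, EDA rate of change -> blob speed and
# sound fades. Uses python-osc; port and addresses are made up for the example.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # assumed OSC-in port of TouchDesigner

def scale(x: float, lo: float, hi: float) -> float:
    """Clamp x to [lo, hi] and normalize to 0..1."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def push_frame(bpm, lfhf, eda_rate):
    """Send one update; each argument is a list with one value per audience member."""
    # Collective beats per minute up -> brighter lights, down -> dimmer.
    client.send_message("/light/intensity", scale(np.mean(bpm), 60, 120))
    # LF/HF ratio of heart rate variability drives the blob/mesh colour.
    client.send_message("/blob/colour", scale(np.mean(lfhf), 0.5, 4.0))
    # Relative EDA change drives blob movement speed and the audio fades.
    eda = scale(np.mean(eda_rate), 0.0, 1.0)
    client.send_message("/blob/speed", eda)
    client.send_message("/sound/fade", eda)

# e.g. called a few times per second with the latest per-member features:
# push_frame(bpm=[72, 81, 65], lfhf=[1.2, 2.4, 0.9], eda_rate=[0.1, 0.4, 0.0])
```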
Also what the risks and problems with these types of sensors and data 100 00:13:09,360 --> 00:13:16,000 collection are, and then the audience could decide, with informed consent, if they just wanted to 101 00:13:16,000 --> 00:13:20,240 stream the data, didn't want to do anything, or wanted to stream and also 102 00:13:20,240 --> 00:13:26,320 contribute the data anonymously to our research. And then when the performance 103 00:13:26,320 --> 00:13:31,970 started, we had a couple of pieces and parts. That is something that you can see in 104 00:13:31,970 --> 00:13:38,800 B, where we showed the live data feed from all of the audience members in individual tiles. We 105 00:13:38,800 --> 00:13:45,680 had that in there before just for debugging, but the audience actually liked it. And so 106 00:13:45,680 --> 00:13:52,080 we made it a part of the performance, also deciding with the choreographers to 107 00:13:52,080 --> 00:13:57,840 include that. And then for the rest, as you see in C, we have the individual 108 00:13:57,840 --> 00:14:07,360 objects, these blob objects that move according to the EDA data and change colour 109 00:14:07,360 --> 00:14:16,160 based on the heart rate information, so the low frequency to high frequency ratio. In B, you see 110 00:14:16,160 --> 00:14:24,720 also these clouds. And similarly, the size is related to the heart rate data, 111 00:14:24,720 --> 00:14:33,120 and the movement again is EDA. And there's also one scene in E where the dancers pick 112 00:14:33,120 --> 00:14:39,760 one person in the audience and ask them to come on stage. And then we display 113 00:14:39,760 --> 00:14:47,840 that audience member's data at large in the back of the projection. And for the rest, 114 00:14:47,840 --> 00:14:54,560 again, we're using this excitement data from the heart rate and from the 115 00:14:54,570 --> 00:15:07,200 electrodermal activity to change sizes and colours. So, to come up with this design, we 116 00:15:07,200 --> 00:15:14,000 went the co-design route, discussing with the researchers, dancers, visual 117 00:15:14,000 --> 00:15:20,320 designers and audio designers a couple of times. And actually that's also how I first got 118 00:15:20,320 --> 00:15:27,840 involved, because the initial ideas from Moe, the primary designer of this 119 00:15:27,840 --> 00:15:36,160 piece, were to somehow combine perception and motion. And I worked a bit in research 120 00:15:36,160 --> 00:15:41,760 with eye tracking. So you see on the screen the Pupil Labs eye tracker, it is 121 00:15:41,760 --> 00:15:50,320 an open source eye tracking solution, and also EOG, electro-oculography, glasses that 122 00:15:50,320 --> 00:15:58,240 use the electrical potential of your eyeballs to detect something rough about eye motion. 123 00:15:58,240 --> 00:16:05,520 And we thought at the beginning we wanted to combine this, a person seeing the play, 124 00:16:05,520 --> 00:16:10,080 with the motions of the dancers, and understand that better. So that's kind of 125 00:16:10,080 --> 00:16:21,760 how it started. The second inspiration for this idea in the theatre came from a 126 00:16:21,760 --> 00:16:29,200 visiting scholar, Jamie Ward, who came over, and his work with the Flute Theatre 127 00:16:29,200 --> 00:16:34,320 in London. That's an inclusive theatre that also does workshops, Shakespeare 128 00:16:34,320 --> 00:16:40,880 workshops.
And he did some sensing just with the accelerometers and gyroscopes of 129 00:16:40,880 --> 00:16:47,018 inertial motion wristbands to detect interpersonal synchrony between 130 00:16:47,018 --> 00:16:53,360 participants in these workshops. And then, when he came over, we did a 131 00:16:53,360 --> 00:16:59,552 small piece where we looked into this interpersonal synchrony again, in face-to- 132 00:16:59,552 --> 00:17:04,160 face communications. I mean, now we are remote and I'm just talking into a camera 133 00:17:04,160 --> 00:17:08,960 and I cannot see anybody. But usually, if you have a face-to-face conversation, which 134 00:17:08,960 --> 00:17:15,040 doesn't happen too often anymore, unfortunately, we show some types of 135 00:17:15,040 --> 00:17:20,560 synchrony: you know, eye blinks, head nods and so on synchronize with 136 00:17:20,560 --> 00:17:24,880 the other person you're talking to. And we also showed, in small 137 00:17:24,880 --> 00:17:30,240 recordings, that we can recognize this with a wearable sensing 138 00:17:30,240 --> 00:17:36,560 setup, so again using some glasses. And we thought, why don't we try to scale that 139 00:17:36,560 --> 00:17:42,400 up? Why don't we try and see what happens in a theatre performance or in another 140 00:17:42,400 --> 00:17:49,810 dance performance and see if we can also recognize some type of synchrony. And 141 00:17:49,810 --> 00:17:57,520 with a couple of ideation sessions and also a couple of test performances, 142 00:17:57,520 --> 00:18:04,880 including dancers trying out glasses and trying out other headwear, which was 143 00:18:04,880 --> 00:18:10,480 not really possible for the dancers to use during the performance, we came up with an 144 00:18:10,480 --> 00:18:18,640 initial prototype that we tried out in, I think, November 2018 or so, where 145 00:18:18,640 --> 00:18:24,320 we used a couple of Pupil Labs and also Pupil Invisible devices. These are nicer eye tracking 146 00:18:24,320 --> 00:18:27,840 glasses, they are optical eye tracking glasses, so they have small cameras in 147 00:18:27,840 --> 00:18:34,080 them, distributed in the audience. A couple of those JINS MEME glasses, which also have 148 00:18:34,080 --> 00:18:38,720 inertial motion sensors in them, so accelerometer and gyroscope. And we had, at 149 00:18:38,720 --> 00:18:46,800 the time, heart rate sensors. However, they were fixed and wired to the system. And 150 00:18:46,800 --> 00:18:53,360 the dancers also wore some wristbands where we could record the motion data. And 151 00:18:53,360 --> 00:18:59,920 then, in these cases, we had projections on three frames on top 152 00:18:59,920 --> 00:19:05,840 of the dancers. One was showing the blink and head nod synchronization of the 153 00:19:05,840 --> 00:19:10,880 audience. The other one showed heart rate and variability. And the third one just 154 00:19:10,880 --> 00:19:17,280 showed the raw feed from one of the eye trackers. And it looked more or less like 155 00:19:17,280 --> 00:19:23,200 this. And from a technical perspective, we were surprised because it actually worked. 156 00:19:23,200 --> 00:19:32,720 So we could stream around 10 glasses, three eye trackers and, I think, four or five 157 00:19:32,720 --> 00:19:40,000 heart rate sensors at the same time, and the server worked.
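*Editor's note: the talk does not spell out the synchrony measure, so as a hedged sketch, here is one common way to quantify interpersonal synchrony between two participants' blink (or head-nod) event streams: a lagged correlation of binned event series. This is an illustration of the idea, not Jamie Ward's published method; bin size and lag window are assumptions.*

```python
# Sketch: synchrony between two event streams (e.g. blink timestamps in
# seconds) as the best correlation over small lags. Illustrative only.
import numpy as np

def event_series(times_s, duration_s, bin_s=0.5):
    """Bin event timestamps into a 0/1 series with bins of bin_s seconds."""
    series = np.zeros(int(duration_s / bin_s))
    for t in times_s:
        series[min(int(t / bin_s), len(series) - 1)] = 1.0
    return series

def synchrony(times_a, times_b, duration_s, max_lag_bins=4):
    """Max lagged correlation between two event streams (here +-2 s)."""
    a = event_series(times_a, duration_s)
    b = event_series(times_b, duration_s)
    a = (a - a.mean()) / (a.std() + 1e-9)  # z-score both series
    b = (b - b.mean()) / (b.std() + 1e-9)
    lags = range(-max_lag_bins, max_lag_bins + 1)
    best = max(float(np.dot(np.roll(a, lag), b)) for lag in lags)
    return best / len(a)  # near 1: strongly synchronized, near 0: unrelated

# Two people blinking almost in unison score high:
print(synchrony([1.0, 5.2, 9.9], [1.2, 5.5, 10.1], duration_s=60.0))
```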
However, from an audience 158 00:19:40,000 --> 00:19:45,360 perspective, a lot of the feedback was that the audience didn't like that just some people 159 00:19:45,360 --> 00:19:50,240 got singled out and got a device by themselves, while others could not really 160 00:19:50,240 --> 00:19:54,560 contribute and also could not see the data. And then also from a performance 161 00:19:54,560 --> 00:19:59,120 perspective, the dancers didn't really like that they couldn't interact with the 162 00:19:59,120 --> 00:20:05,600 data. The dance piece in this case was also pre-choreographed, so there was no 163 00:20:05,600 --> 00:20:11,280 possibility for the dancers to really interact with the data. And then also, 164 00:20:11,280 --> 00:20:17,120 again, from an aesthetic perspective, we really didn't like that the screens were 165 00:20:17,120 --> 00:20:22,320 on top, because either you would concentrate on the screens or you would 166 00:20:22,320 --> 00:20:28,160 concentrate on the dance performance. And you also had to kind of make a decision 167 00:20:28,160 --> 00:20:33,360 about what type of visualization you would focus on. So overall, you know, kind of partly 168 00:20:33,360 --> 00:20:40,000 okay, but still there were some troubles. One was definitely that we wanted to include 169 00:20:40,000 --> 00:20:48,560 all of the audience, meaning we wanted to have everybody participate. The 170 00:20:48,560 --> 00:20:53,600 problem with that was that having enough eye trackers or having 171 00:20:53,600 --> 00:21:00,960 enough head-worn devices was an issue. In addition to that, you know, if 172 00:21:00,960 --> 00:21:05,840 it's head-worn, some people might not like it. The pandemic hadn't started yet when 173 00:21:05,840 --> 00:21:12,160 we did the recordings; however, there was already some information 174 00:21:12,160 --> 00:21:19,120 about the virus going around. So we didn't really want to be 175 00:21:19,120 --> 00:21:25,840 giving everybody some eyeglasses. So then we moved to the heart rate and galvanic 176 00:21:25,840 --> 00:21:33,440 skin response solution, and the setup where the projection is now part of the 177 00:21:33,440 --> 00:21:38,320 stage. So we used the two walls, but, it's a little bit hard to see 178 00:21:38,320 --> 00:21:45,360 in the images, we also used the floor as another projection surface for the 179 00:21:45,360 --> 00:21:50,240 dancers to interact with, and the main interaction actually came then over the 180 00:21:50,240 --> 00:22:02,000 sound. So then, moving over to the lessons learned: what did we take away 181 00:22:02,000 --> 00:22:15,280 from that experience? The first part came from talking with the dancers and talking 182 00:22:15,280 --> 00:22:21,280 with the audience: often, especially with the more intricate, more 183 00:22:21,280 --> 00:22:27,920 abstract visualizations, it was sometimes hard for them to interpret how their own data 184 00:22:27,920 --> 00:22:33,760 would feed into the visualization. So, you know, some audience members 185 00:22:33,760 --> 00:22:38,000 mentioned that at some points in time they were not sure if they were influencing anything 186 00:22:38,000 --> 00:22:44,801 or if it had an effect; in other parts, especially if they saw the live data, it 187 00:22:44,801 --> 00:22:50,165 was kind of obvious.
But for future work, we really want to play more with the 188 00:22:50,165 --> 00:22:57,197 agency, and also perceived agency, of the audiences and the performers. And we also 189 00:22:57,197 --> 00:23:02,714 really wonder: how can we measure these types of feedback loops? Because now we have 190 00:23:02,714 --> 00:23:07,331 these recordings, and we also looked a little bit more into the data, but it's hard to 191 00:23:07,331 --> 00:23:16,384 understand: were we successful? I think to some extent maybe yes, because the 192 00:23:16,384 --> 00:23:24,295 experience was fun, it was enjoyable. But on the level of: did we really create 193 00:23:24,295 --> 00:23:28,867 feedback loops, and how do you evaluate feedback loops, that's something that we 194 00:23:28,867 --> 00:23:35,112 want to address in future work. On the other hand, what was surprising, as I 195 00:23:35,112 --> 00:23:42,054 mentioned before: the raw data was something that the dancers as well as the 196 00:23:42,054 --> 00:23:48,690 audience really liked. And that was surprising for me, because I thought we had 197 00:23:48,690 --> 00:23:54,273 to hide that more or less. But we had it on, as I said, kind of as a debug view at 198 00:23:54,273 --> 00:24:00,023 the beginning of some test screenings, and audience members were interested in it and 199 00:24:00,023 --> 00:24:05,927 could see it and were talking about it: "Oh, see, your heart rate is going up, or your EDA is 200 00:24:05,927 --> 00:24:11,327 going up." And the dancers also liked that. And we used that then 201 00:24:11,327 --> 00:24:19,663 in the three performances that we then successfully staged, especially for scenes 202 00:24:19,663 --> 00:24:25,468 where the dancers would interact directly with parts of the audience. At the 203 00:24:25,468 --> 00:24:32,984 beginning of the play there is a scene where the dancers give out business cards to some 204 00:24:32,984 --> 00:24:38,947 audience members. And it was fun to see that some audience members could identify 205 00:24:38,947 --> 00:24:44,520 themselves, and other audience members would identify somebody else who was sitting 206 00:24:44,520 --> 00:24:50,264 next to them. And then this member had a spike in EDA because of the surprise. So 207 00:24:50,264 --> 00:24:55,286 there was really, you know, some interaction going on. So maybe, if 208 00:24:55,286 --> 00:25:00,875 you're planning to do a similar event, staying close to the raw data, and also low 209 00:25:00,875 --> 00:25:07,404 latency, is, I think, quite important for some types of these interactions. From 210 00:25:07,404 --> 00:25:13,534 the dancers there was big interest: on the one side, they wanted to use the data 211 00:25:13,534 --> 00:25:20,025 for reflection. So they really liked that they had the printouts of the responses of 212 00:25:20,025 --> 00:25:27,783 the audience later on. However, they also wanted to dance more with the biometric data 213 00:25:27,783 --> 00:25:33,528 and use it more for their rehearsals. So, of course, you know, we had to 214 00:25:33,528 --> 00:25:39,275 co-design, so we worked directly with them. We showed the dancers the sensors and the 215 00:25:39,275 --> 00:25:43,976 possibilities and then worked with them to figure out what can work and what cannot 216 00:25:43,976 --> 00:25:49,418 work, and what might have an effect and what might not have an effect.
And then we did 217 00:25:49,418 --> 00:25:55,479 some, as you saw, some prototype screenings and also some internal 218 00:25:55,479 --> 00:26:02,361 rehearsals where we used some recorded data. A couple of people of 219 00:26:02,361 --> 00:26:06,618 us were sitting in the audience. We got a couple of other researchers and also 220 00:26:06,618 --> 00:26:12,499 students involved to sit in the audience and stream data. And we also worked a 221 00:26:12,499 --> 00:26:19,970 little bit with prerecorded experiences and also synthetic experiences of how we 222 00:26:19,970 --> 00:26:25,631 envisioned that the data would move. But still, it was not enough in terms of 223 00:26:25,631 --> 00:26:32,169 providing an intuitive way to understand what is going on, especially for the 224 00:26:32,169 --> 00:26:39,045 visualizations and the projections. They were harder to interpret than the sound and 225 00:26:39,045 --> 00:26:50,039 the soundscape. And then the next, and maybe the biggest, point is the 226 00:26:50,039 --> 00:26:55,704 sensor and feature best practices. We're still wondering, you know, what to 227 00:26:55,704 --> 00:27:02,739 use. We're still searching: what kind of sensing equipment can we use to relay 228 00:27:02,739 --> 00:27:08,760 this invisible link between audience and performers? How can we 229 00:27:08,760 --> 00:27:15,032 augment that? We started out with the perception and eye tracking part; we then 230 00:27:15,032 --> 00:27:22,367 went to a wrist-worn device because it's easier to maintain and it's also wireless. 231 00:27:22,367 --> 00:27:30,089 And it worked quite well to stream 50 to 60 audience members for one of those 232 00:27:30,089 --> 00:27:38,710 events to a wireless router and do the recording, as well as the live 233 00:27:38,710 --> 00:27:43,043 visualization, with it. However, the features might not have been... 234 00:27:43,043 --> 00:30:42,963 *Audio Failure* 235 00:30:42,963 --> 00:30:55,535 Okay. Sorry for the short part where it was offline. So, we were talking about sensor 236 00:30:55,535 --> 00:31:01,853 features and best practices. So in this case, we are still searching for the right 237 00:31:01,853 --> 00:31:13,099 types of sensors and features to use for this type of audience-performer 238 00:31:13,099 --> 00:31:23,659 interaction. And we were using, yeah, the low frequency to high frequency ratio of 239 00:31:23,659 --> 00:31:28,789 the heart rate values and also the relative changes of the EDA. And that was 240 00:31:28,789 --> 00:31:35,381 working, I would say, not that well compared to other features that we have now 241 00:31:35,381 --> 00:31:41,974 found while looking into the performances and the recorded data of the around 98 242 00:31:41,974 --> 00:31:49,431 participants who agreed to share their data with us for these performances. And from 243 00:31:49,431 --> 00:31:56,280 the preliminary analysis that Karen Han, one of our researchers, is working on, 244 00:31:56,280 --> 00:32:03,774 looking into what types of features are indicative of changes in the performance, 245 00:32:03,774 --> 00:32:10,899 it seems that a feature called pNN50, which is related to heart rate variability, to the 246 00:32:10,899 --> 00:32:19,252 R-R intervals, seems to be quite good. And also the peak detection per minute using 247 00:32:19,252 --> 00:32:25,349 the EDA data, so we're just counting the relative changes, the relative ups and 248 00:32:25,349 --> 00:32:32,346 downs, of the EDA.
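*Editor's note: a hedged sketch of the two features named here, pNN50 over the R-R intervals and an EDA peak count per minute. The sampling rate and the peak-prominence threshold are assumptions, not the study's exact parameters.*

```python
# pNN50: share of successive R-R interval differences larger than 50 ms;
# higher values are associated with relaxation. EDA peaks per minute: count
# the relative ups and downs of the electrodermal activity signal.
import numpy as np
from scipy.signal import find_peaks

def pnn50(rr_ms: np.ndarray) -> float:
    """pNN50 in percent, from R-R intervals given in milliseconds."""
    diffs = np.abs(np.diff(rr_ms))
    return 100.0 * np.mean(diffs > 50.0)

def eda_peaks_per_minute(eda: np.ndarray, fs: float = 4.0) -> float:
    """Peak rate of the EDA signal; fs and prominence are assumed values."""
    peaks, _ = find_peaks(eda, prominence=0.05)
    return len(peaks) / (len(eda) / fs / 60.0)

# Example: two of the four successive differences exceed 50 ms -> 50.0
print(pnn50(np.array([810, 790, 850, 845, 900])))
```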
If you're interested, I'm happy to share the data with you. So 249 00:32:32,346 --> 00:32:37,564 we have three performances, each around an hour, and 98 participants in 250 00:32:37,564 --> 00:32:45,658 total. And we have the heart rate data and the EDA data from the two fingers, as well 251 00:32:45,658 --> 00:32:54,257 as the motion data. We haven't used the motion data at all, except for 252 00:32:54,257 --> 00:33:00,384 filtering the EDA and heart rate data a little bit, because if you're moving a 253 00:33:00,384 --> 00:33:06,606 lot, you will have some errors and some problems, some motion artifacts, in them. But 254 00:33:06,606 --> 00:33:15,370 what do I mean with: why is the pNN50 or why is the EDA peak detection so nice? Let's 255 00:33:15,370 --> 00:33:20,679 look a little bit closer into the data. Here I just highlighted 256 00:33:20,679 --> 00:33:31,073 performance three from the previous plots. You see pNN50, on the left side is the scale, and the 257 00:33:31,073 --> 00:33:40,491 blue line gives you the average of the pNN50 value. So this is the R-R interval 258 00:33:40,491 --> 00:33:47,813 related heart rate variability feature, and that feature is especially related to 259 00:33:47,813 --> 00:33:54,658 relaxation and also to stress. So usually a higher pNN50 value means that you're 260 00:33:54,658 --> 00:34:00,991 more relaxed, and a lower value means that you are 261 00:34:00,991 --> 00:34:07,528 more stressed out. So what happens 262 00:34:07,528 --> 00:34:12,777 now in the performance is something that fits very, very well and correlates with 263 00:34:12,777 --> 00:34:18,863 the intention of the choreographer. You see sections one, 264 00:34:18,863 --> 00:34:27,264 two, three, four, five and six at the bottom. The first half 265 00:34:27,264 --> 00:34:32,062 of the performance is meant to create a conflict in the audience and to stir them up a 266 00:34:32,062 --> 00:34:39,955 little. So, for example, the business card scene is part of that, and also 267 00:34:39,955 --> 00:34:47,823 the scene where somebody gets brought from the audience to the stage and joins the 268 00:34:47,823 --> 00:34:53,640 performance is part of that, whereas the latter part is more about reflection 269 00:34:53,640 --> 00:34:59,347 and also relaxation, taking in what you experienced in the first part. And that's 270 00:34:59,347 --> 00:35:03,623 something that you actually see quite nicely in the pNN50. At the beginning it's 271 00:35:03,623 --> 00:35:10,410 rather low, which means the audience is slightly tense, versus the latter part, where 272 00:35:10,410 --> 00:35:17,999 they are more relaxed. Similarly, the EDA at the bottom, as a bar chart, gives you an 273 00:35:17,999 --> 00:35:23,559 indication of a lot of peaks happening at specific points. And these points 274 00:35:23,559 --> 00:35:31,316 correlate very well with memorable scenes in the performance. So, 275 00:35:31,316 --> 00:35:36,329 actually, section four, the red one, is the one where somebody from the 276 00:35:36,329 --> 00:35:41,592 audience gets brought onto the stage. And, where is this, I think around minute 277 00:35:41,592 --> 00:35:52,512 twelve, there is a scene where the dancers hand out business cards. And that's 278 00:35:52,512 --> 00:35:56,159 also something you can see there, I think.
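*Editor's note: a minimal sketch of the kind of motion-artifact filtering just mentioned: masking out EDA/heart-rate samples while the wristband's accelerometer reports strong movement. The 1 g gravity baseline, threshold and padding window are assumptions, and it presumes all streams are resampled to a common rate.*

```python
# Sketch: mark samples as unusable while acceleration deviates from gravity,
# plus a short padding window around each movement burst.
import numpy as np

def motion_mask(acc_xyz: np.ndarray, fs: float, thresh_g: float = 0.2,
                pad_s: float = 1.0) -> np.ndarray:
    """True where data is usable (little motion), False near movement."""
    mag = np.linalg.norm(acc_xyz, axis=1)       # acceleration magnitude in g
    moving = np.abs(mag - 1.0) > thresh_g       # deviation from 1 g (gravity)
    pad = int(pad_s * fs)
    kernel = np.ones(2 * pad + 1)
    dilated = np.convolve(moving.astype(float), kernel, mode="same") > 0
    return ~dilated

# usage, assuming eda and acc were resampled to the same rate fs:
# eda_clean = eda[motion_mask(acc, fs=32.0)]
```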
So it's promising; we're definitely not there yet 279 00:35:56,159 --> 00:36:01,840 on the data analysis part, but there are some interesting things to see. And that 280 00:36:01,840 --> 00:36:11,232 kind of brings me back to the starting point. So I think it was an amazing 281 00:36:11,232 --> 00:36:16,420 experience, actually, working with a lot of talented people on that, and the 282 00:36:16,420 --> 00:36:22,296 performance was a lot of fun. And we are slowly moving towards putting the audience 283 00:36:22,296 --> 00:36:28,499 on stage and trying to break the fourth wall, I think, with these types of setups. 284 00:36:28,499 --> 00:36:36,073 And that leads me to the end of the talk, where I just have to do a shout- 285 00:36:36,073 --> 00:36:42,500 out to the people who did the actual work. So all of the talented performers 286 00:36:42,500 --> 00:36:49,629 and the project lead, especially Moe, who organized everything and was also the link between 287 00:36:49,629 --> 00:36:56,360 the artistic side, the dancers of Mademoiselle Cinema, and us, as well as the 288 00:36:56,360 --> 00:37:04,953 choreographer, Ito-san. And yeah, I hope I didn't miss anybody. So that's it. 289 00:37:04,953 --> 00:37:13,965 Thanks a lot for this opportunity to introduce this work to you. And now I'm 290 00:37:13,965 --> 00:37:21,338 open for a couple of questions and remarks. I also wanted to host a self-organized 291 00:37:21,338 --> 00:37:25,715 session sometime. I haven't really gotten the link or anything, but I'll probably 292 00:37:25,715 --> 00:37:32,630 just post something on Twitter or in one of the chats if you want to stay in 293 00:37:32,630 --> 00:37:38,661 contact. I'll try to get two or three researchers to join as well. I know George, 294 00:37:38,661 --> 00:37:44,260 who was working on the hardware, and Karen, who worked on the visualizations and 295 00:37:44,260 --> 00:37:53,072 the data analysis, might be available. And if you're interested in that, just send me an 296 00:37:53,072 --> 00:37:59,970 email, or check, maybe I'll just also add it to the blog post or so if I get the link 297 00:37:59,970 --> 00:38:05,339 later. So, yeah. Thanks a lot for the attention. 298 00:38:08,548 --> 00:38:16,686 Herald: Thanks, Kai, for this nice talk. For the audience, please excuse us for the 299 00:38:16,686 --> 00:38:22,028 small disruption of service we had here. We're a little bit late already, but I 300 00:38:22,028 --> 00:38:26,560 think we still have time for a question or so. Unfortunately, I don't see anything 301 00:38:26,560 --> 00:38:31,610 here online at the moment. So if somebody tried to pose a question and 302 00:38:31,610 --> 00:38:36,714 there was also a disruption of service, I apologize for that. On the 303 00:38:36,714 --> 00:38:43,192 other hand now, Kai, you talked about data sharing. So how can the data be accessed? 304 00:38:43,192 --> 00:38:47,836 Do people need to contact you or drop you a mail or personal message? 305 00:38:47,836 --> 00:38:54,307 Kai: Yeah, so right now the publication is 306 00:38:54,307 --> 00:38:59,600 still not published, and there are actually also some issues, a little bit of some 307 00:38:59,600 --> 00:39:03,307 rights issues or so on. So the easiest part is just to send me a mail. 308 00:39:03,307 --> 00:39:14,380 It will be posted sometime next year on a more public website. But the easiest 309 00:39:14,380 --> 00:39:20,360 is just to send me a mail.
There are already a couple of people working on it, and we 310 00:39:20,360 --> 00:39:25,560 have the rights to share it. It's just a little bit of a question of setting it up. 311 00:39:25,560 --> 00:39:31,540 I wanted to have the website online before the talk as well, but yeah, with the 312 00:39:31,540 --> 00:39:35,320 technical difficulties and so on, everything is a little bit harder this year. 313 00:39:35,320 --> 00:39:43,060 Herald: Indeed. Indeed. Thanks, guys. Yes, I'd say that's it for this 314 00:39:43,060 --> 00:39:49,460 session. Thank you very much again for your presentation. And I'll switch back to 315 00:39:49,460 --> 00:39:53,087 the others. 316 00:39:53,087 --> 00:39:58,301 *postroll music* 317 00:39:58,301 --> 00:40:33,000 Subtitles created by c3subtitles.de in the year 2020. Join, and help us!