!!Con West 2020 - don-E Merson: Sonification: You Can See Your Complex Data with Sound!!

Mar 20, 2020 18:52 · 2379 words · 12 minute read

Hi. I’m gonna be talking about sonification. So first, who I am. I’m don-E Merson. That’s me before I donated my hair. I’m from the University of Arizona, and I am a PhD student at the School of Information. I’m also a programmer. Full stack developer by trade. I’m doing my PhD part-time, and hopefully eventually will get through it. And what you’re about to hear is what I’m gonna do my thesis on. So the basic game plan for the next ten minutes is: What is sonification? In case you’ve never heard of this term before. What are some of the problems around that? And what are some of the solutions to these problems? And how can sonification let us see more dimensions? And that’s really the gist of what I’m really interested in, is being able to see more dimensions.

Not see with eyes, but actually perceive more dimensions simultaneously. And that’s what I think we can do with sonification. And I’ll talk a little about my future research, give you my email, in case you’re interested in being part of that research. So what is it? So first let’s think about visualization. What is visualization? Visualization is data turned into some kind of visual representation. That’s a tough word. I shouldn’t use multisyllable words here at the beginning. And sonification is data turned into audible representations. Right? So Wikipedia says: Sonification is the use of non-speech audio to convey information or perceptualize data. And part of me right off the bat wants to say: Why do we care that it’s non-speech audio? Speech could be one more dimension that we put in there. So sonification in real life. You know about sonification. You just never realized it was sonification. So a Geiger counter is an example of sonification.

It’s one dimensional data that you’re walking around, and it’s telling you… gee-gee-gee. I apologize to whoever is captioning. Because there will be lots of sound effects and stuff like that too. There’s that. Heartbeat monitor. Once again, something that we can hear. And the rhythm will tell you whether or not – I won’t do the heartbeat. I know you’re waiting for me to do that. And then radar. Radar has the sound, once again. And so these are ways that we can perceive information with sound. These are sonification in real life. So advantages and disadvantages. Visualization – the pro is you can see all the data at once.

It’s gonna be the opposite for sonification, obviously. And it’s a known entity. Ever since you were in grade school, you’ve been taught how to understand data in some kind of form like a bar chart, a pie chart, or something like that. So the con is there’s a limited number of dimensions. There’s really X, Y, and kind of Z. Z.5, maybe. Sonification – the pro is you can perceive multiple dimensions. And this is the part to me that’s the most interesting. And I kind of came about it in a really weird way. In a former life, I was a musician. And I was learning this – creating this music theory, and I was trying to get all these dimensions that I was hearing musically, and put them down on paper, and it finally hit me. Wait. I’m going the wrong way. I can’t put all these dimensions that I’m hearing -- I can’t talk about all these things on sound and then put them down on paper. Because music notation is kind of weak in that way. And there’s all these dimensions that I’m hearing, that are not on the page when I put it down, so I decided: Let’s try to get it another way.

If you’re not 100% sure what I mean by all these different dimensions, let me give you an example. When I was a music teacher, I had a student. He would come in, and people would bring their songs in. I would teach them music. He came in. He was into punk rock. I taught him some power chords, he had an electric guitar, and he came up to me one day and said: Can you show me how they’re doing it downstairs? He was just playing what you were playing, but he didn’t hear it that way. I finally realized what the problem was. Went out to my car, got my distortion pedal, put in the distortion pedal. He’s like…

Oh yeah! That’s exactly what I want to hear! To him, the tone was the important part. It wasn’t the actual pitch. Right? He was hearing something that was more important to him, that actually wasn’t down on the music. So that’s an example of one of those important dimensions that we don’t write down. You don’t write down “I have a distortion”, or “I have an overdrive” or chorus or something like that. You don’t write that down in the written music. All these other dimensions you hear and understand, but you don’t write them down. So the con for sonification is that it’s an unknown entity.

You probably never heard of sonification before you came here today. You can’t see the data all at once. With a Geiger counter, you don’t have a readout of all the data you’ve seen as you walk around. That’s the problem. And limited range of hearing. You can only hear so far. So there’s gonna be a mapping of that. There’s gonna be a compression of that, a lot of times, because you can have really wide-ranging information. So… Multidimensional data. What I mean by that is data with multiple columns. So in visualizations, you’ll see something like this.

You have the two real dimensions, the X and the Y, and you have these ocular variables. We have a few ocular variables. You see one right there. It’s color. That color can let you understand what’s going on here. You can’t really see this. This is three types of biological entities. But you see you have X and Y, it’s a positive correlation, and you see by color that these are related to these X and Y variables, very much so. Right? Another thing you can do is patterns.

So if you’ve ever seen a heat map, sometimes you’ll have some kind of hatch pattern that you would see. Or another type of pattern would be like – if you ever work with R, they always have the obligatory ggplot cars data, and it has triangles or circles as part of a dimension. And there’s size. Something could be bigger or smaller. And that can show another dimension of the data that you have. And there’s faceting, which is basically taking a picture and doing it multiple times. So you have four dimensions that you see all at once. So sonification has lots of different things. For one thing, it’s got pitch. So pitch is something that we hear right away. We can hear something…

What pitch it is. And what we’re really good at is hearing timbre, or tone. We can tell the difference between a guitar and a piano instantly. We don’t have to think about it. It’s just instant into our brain. Rhythm – we have a good idea of basic rhythm. So when we hear duh-duh-duh or duh-duh-DUH, we can hear those are really different. We can understand that. Again, apologies to the captioner. How does that come across? Awesome. And then there’s harmony. Harmony is basically chords, a whole bunch of notes put together, that end up having another quality to it. There’s major and minor, like…

When you’re first learning, you think minor chords sound sad and major chords sound happy, and stuff like that. There are many more types of harmony out there. And there’s melody. We use melody all the time. For example, you might hear something like… duhhh-duhh… (Star Wars) We know what’s gonna happen. John Williams has told us the good guys are about to do something. Dunn-dunn… (Imperial March) If we hear that… Here’s that guy. The Darth Vader song comes on.

That’s a bit of data! In case you’re not paying attention, falling asleep in your chair in the movie theater, here’s what you’ve got. We have the ability to hear multiple types at once. We can hear the Darth Vader theme with a different pitch, on a different instrument, maybe, and we can all of a sudden perceive multiple dimensions simultaneously. Also, placement in the aural field. We are very good, if we have headphones on – you can hear left, right, and center. In the old days, they used to use that a lot with stereo. One guy would be on the left, one guy on the right.

Actually, there have been studies showing that we’re better at perceiving sound around us like that than we are with our eyes. So our aural field is actually really good. There are people who are doing this right now. If you’ve ever heard the song New York State of Mind, there’s an ABAA pattern – if you have headphones on, you’ll hear it go left, right, left, left. Left, right, left, left. In the actual recording. So musicians already know it. They’re already using it. They kind of intuitively know it. They’re just not thinking about what that could actually mean. And there’s echo. We’re really good at hearing echo. You can tell I’m in a room right now that’s kind of big. Versus if I was in a stadium.
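That left/right/left/left idea is simple enough to sketch in code. A minimal sketch in Python, where the pan values and section labels are my own illustrative choices, not anything from the talk:

```python
# Map a categorical dimension to stereo placement:
# -1.0 = hard left, 0.0 = center, +1.0 = hard right.
PAN_POSITIONS = {"A": -1.0, "B": 1.0, "C": 0.0}

def pattern_to_pan(pattern):
    """Turn a section pattern like 'ABAA' into a list of pan positions."""
    return [PAN_POSITIONS[section] for section in pattern]

print(pattern_to_pan("ABAA"))  # → [-1.0, 1.0, -1.0, -1.0]
```

The same map could encode any three-valued data column as left/right/center instead of song sections.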

You can tell when people are in a stadium – if you’ve ever heard your favorite band in a rock stadium, you can understand that echo. We can hear that. We just don’t have a way to talk about that yet. Right now. So the early experiments all used pitch to indicate size. The data unfolds in time. Think about the Geiger counter, the heartbeat monitor – the data unfolding in time. There’s another one. If anyone is interested in the gravitational waves that were just discovered, the way they’re representing the data is through sonification! It’s this bllooooooop! (laughter) Awesome. So… That’s one of the things they did. But there are some problems. It’s really only one dimensional. Which is a big thing.
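A minimal sketch of that early pitch-indicates-size approach, assuming a simple linear map (the function name and the 220–880 Hz frequency range are my own illustrative choices):

```python
def value_to_frequency(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a data value from [lo, hi] onto a frequency range in Hz."""
    t = (value - lo) / (hi - lo)
    return f_min + t * (f_max - f_min)

# The series is then played one value at a time -- the data unfolds in time.
data = [3, 7, 1, 9, 5]
frequencies = [value_to_frequency(v, min(data), max(data)) for v in data]
print(frequencies)  # → [385.0, 715.0, 220.0, 880.0, 550.0]
```

One column of data in, one stream of pitches out: that is exactly the one-dimensionality problem.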

And mapping the pitch – that’s another problem that we’ll have to figure out. But it also takes away the multidimensional attributes – it doesn’t really use them. And it’s a problem, because most people who are not musicians can’t hear the difference between small pitch changes. Right? So having something like that, you’re kind of excluding people who aren’t musically trained. So this is where my research ideas come in. I want to use sonification to expand the dimensions perceived.

And I’d like to use it with a mouse with a tooltip. Imagine… Also add more dimensions visually. Imagine a heat map. You have a heat map, and with your mouse… Am I done? Okay. All right. Sorry. I heard a sound and reacted to it! (laughter) So as I mouse over something, I can hear multiple musical tones that explain something about what I’m actually touching at that moment. So that’s the idea here. What I’m trying to do is make heat maps for the ear. Heat maps break down quantitative data into easy-to-distinguish colors. So it’s not so much tone in the sense of a C versus a B, et cetera – because people can’t hear these minor changes.

So break these up into easier-to-distinguish pitches. For example: low, medium, high. Right? Other dimensions are easy to break up too. Short melodies – we already did the John Williams thing with those little melodies. Different instruments. If a different instrument is playing a different melody, that’s two dimensions that we can hear at a time, simultaneously. Left, right, and center. If we hear it on the left, it means something. If we hear it on the right, it means something. If we hear it in the center, it means something. If we hear all three, it might mean something else. The stadium, the small room, no echo – those are three levels of yet another dimension.
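A minimal sketch of this heat-map-for-the-ear idea, under assumed bin boundaries and frequencies (every name and number here is an illustrative assumption, not a value from the talk):

```python
# Instead of fine pitch changes only trained musicians can hear,
# discretize each value into a few easy-to-distinguish pitch bins.
PITCH_BINS = {"low": 220.0, "medium": 440.0, "high": 880.0}

def pitch_bin(value, lo, hi):
    """Place a value from [lo, hi] into one of three coarse pitch bins."""
    t = (value - lo) / (hi - lo)
    if t < 1 / 3:
        return "low"
    if t < 2 / 3:
        return "medium"
    return "high"

def sonify_cell(value, lo, hi, instrument="piano", pan=0.0):
    """One heat-map cell -> (frequency, instrument, pan): three sonic
    dimensions a listener could perceive simultaneously on mouse-over."""
    return (PITCH_BINS[pitch_bin(value, lo, hi)], instrument, pan)

print(sonify_cell(9, 0, 9, instrument="guitar", pan=-1.0))
# → (880.0, 'guitar', -1.0)
```

Instrument and pan stand in for two further data columns; echo would be a fourth dimension in the same spirit.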

We would break up and discretize the data into these ideas, and simultaneously be able to put all the information together. So that’s what my research is about for the next two years. What different dimensions can people perceive simultaneously? This is how I can hear it, but I was a trained musician. Maybe the average person is not gonna be able to hear this. Part of that is gonna be some research time: actually saying, hey, here’s what this means. I play this.

Can you tell me what that means? Does it matter if they’re a musician or not? That’s probably gonna be a big question. And then I’m gonna build a prototype tool to allow the configuration of different sonic dimensions. Once I know what people can perceive, I’m gonna build this tool that allows people to take their data and sonify it, and then test the usability: have people put together sonifications of their own ideas and present them to others. When you give the tool to someone and see what they do, that actually helps you see if the information is getting conveyed. So that’s the idea there. If anyone is interested, I’m just starting. I’m working on my comprehensive exams this summer. That’s my email address right there: dmerson@arizona.edu. Talk to me – I’ve got a lot of cards. If you’re interested in the subject or have any ideas, I’m really early on this, so I’m happy to hear them.

If anyone has ideas, I’m really excited to hear about them. So in conclusion: sonification is turning data into sound. Early experiments didn’t use the multidimensional aspects. I’m looking to use heat maps of sound, and researching people’s abilities to understand these multiple dimensions of sound. So that’s my talk. Thank you!