Quiz buttons in node.js

For our recent fall party we made a music quiz, and if you’re quizzin you gotta have some buttonpressin. We looked for quiz buttons at Teknikmagasinet, but surprisingly they didn’t have any. So, www, what do you have to offer? Nothing? Well OK, let’s build some buttons in node.js. Real-time web!

5 teams, connected to the same wifi, visit the laptop where node.js is running. They choose a team color (from #A0A to #EE0), and that color is then removed from the other players’ screens.

socket.on('claim_team', function (data) {
   io.sockets.emit('remove_team', data);
});
If one team pushes their button, no one else can push theirs until the quiz show host resets. On the laptop we see the scores and who pushed first.

Super simple code. The client tells the server that it pushed:

socket.emit('ipushed', { team: myteam });

And on the server side, change a state and tell the scoreboard to dim the losers:

socket.on('ipushed', function (data) {
   if (state === 'open') {
      state = 'locked';
      io.sockets.emit('they_pushed', data);
   }
});
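The host’s reset isn’t shown above, but the whole lock fits in a few lines. Here’s a minimal sketch of the state machine – the function names and the reset event are our own, not the original code:

```javascript
// Sketch of the buzzer lock: first push wins, everyone else is locked
// out until the quiz host resets. Names here are hypothetical.
function createBuzzer() {
  let state = 'open';
  return {
    // A team pushes: returns true only if they were first
    push(team) {
      if (state !== 'open') return false;
      state = 'locked';
      return true;
    },
    // The host resets for the next question
    reset() { state = 'open'; },
    get state() { return state; }
  };
}
```

On the server, a `push()` that returns true would trigger the `io.sockets.emit("they_pushed", data)` above, and a hypothetical `socket.on('reset', ...)` from the host page would call `reset()`.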

[Photo: the quiz buttons – Blue is winning!]

The music questions were pretty nice ones too. Pop songs reworked in different ways – DJ Premierified, Brian Enofied, Bad youtube cover-ified, Dubified etc. This Enofied one was a hit.

Web audio music game

After we did the rhythm experiment at Music Hackday we fooled around a bit more with the web audio synthesizer. It’s pretty amazing that there’s a built-in synth in Chrome, and if you have some knowledge of synthesizers, the different audio experiments out there (Google’s own Minimoog, for example) actually seem pretty straightforward when you look at the API.

You basically just create an oscillator instance:
oscillator = context.createOscillator()
Give it some properties:
oscillator.frequency.value = 440
Create a filter:
filter = context.createBiquadFilter()
Choose filter type:
filter.type = 'lowpass'
etc.

It then works a lot like a modular synthesizer, where you connect all these boxes into each other to tamper with the audio signal before it hits the speaker. Like:
oscillator.connect(filter)
filter.connect(gain)
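Putting the pieces together, a complete chain from oscillator to speakers might look like this. This is a sketch with our own function name; it assumes a gain node created with `createGain()` and a standard `AudioContext`:

```javascript
// Sketch of a full signal chain: oscillator -> filter -> gain -> speakers.
// Function and parameter names are ours, not from the original experiment.
function buildVoice(context, freq) {
  var oscillator = context.createOscillator();
  oscillator.frequency.value = freq;   // e.g. 440 Hz = A4

  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';             // tame the upper harmonics

  var gain = context.createGain();
  gain.gain.value = 0.5;               // volume

  // Patch the modules together, like cables on a modular synth
  oscillator.connect(filter);
  filter.connect(gain);
  gain.connect(context.destination);

  return oscillator;                   // call .start() on this to play
}
```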

[Screenshot: neworder]

This is a simple game built with this Chrome audio API. All sound is generated on the fly, and it only works in Chrome. You hear a melody, which then gets split up into pieces, represented by boxes. You can click on a box to hear that part, and you should then move the boxes into the original order of the melody. It gets really hard pretty quickly; maybe some musical Einstein could get up to 20 points or so.
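The core of the game logic is tiny: shuffle the melody pieces, then compare the player’s ordering against the original. A sketch of that idea – the names and data are made up, not the game’s actual code:

```javascript
// Shuffle a copy of the melody pieces with Fisher-Yates,
// and check whether the player's guess matches the original.
function shuffle(pieces) {
  var copy = pieces.slice();
  for (var i = copy.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = copy[i]; copy[i] = copy[j]; copy[j] = tmp;
  }
  return copy;
}

function isCorrectOrder(guess, original) {
  return guess.length === original.length &&
         guess.every(function (p, i) { return p === original[i]; });
}
```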

Try it here: http://earthpeople.se/game_neworder

Our #musichackday hack – Stealing Feeling!

UPDATE: We actually won an iPad Mini courtesy of Echonest for the hack! Very encouraging!

Me, Fredrik and Adrian attended the Music Hackday in Stockholm, and made a little something.

In short:
1. Steal the feeling from someone who’s got more of it than you.
2. Apply it to your music.

It’s based around a 16-step JavaScript sequencer which a user can program using a drum kit generated in web audio. Then, apply the swing from Aretha Franklin or Squarepusher to your song.
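One simple way to “apply the swing” in a 16-step sequencer is to delay every off-beat 16th by a fraction of a step. A sketch under that assumption – how the hack actually represented the Echonest swing data may well differ:

```javascript
// Compute the start time (in seconds) of each 16th-note step,
// pushing the off-beats late by `swing` (0 = straight, 0.5 = heavy swing).
function stepTimes(bpm, steps, swing) {
  var stepDur = 60 / bpm / 4;               // one 16th note in seconds
  var times = [];
  for (var i = 0; i < steps; i++) {
    var t = i * stepDur;
    if (i % 2 === 1) t += swing * stepDur;  // delay the off-beats
    times.push(t);
  }
  return times;
}
```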

As a bonus, the drum kit plays in the left channel, and a trig signal is sent in the right channel, so you can hook up your CV/Gate synthesizers.

We call it Stealing Feeling.

Technologies used: Echonest API (for stealing the feeling), JavaScript (for the sequencer), Web Audio HTML5 API (for creating the drum sounds).


Line-out scrobbler – when DJ’ing

When DJ’ing at Debaser Slussen yesterday, we decided to hook up the Line-out scrobbler to the DJ mixer. I knew that Echonest wouldn’t be able to resolve all the weird stuff we play, but was hoping for at least a 70% success rate. Unfortunately we had a resolve rate of about 20%, which makes our little hack project quite a disappointment. We also had 3 incorrect resolves during the 7-hour-long DJ set.

According to Echonest, the catalogue is only about 150 000 songs, and until this grows substantially, this project will be put on hold.

Here’s a clip of us in action last night:

Here’s the last.fm feed we scrobbled to:
http://www.last.fm/user/fredagsbiten

Line-out scrobbler

For years I’ve been trying to hijack music recognition services like Shazam to be able to recognize music. I’ve finally got this working thanks to the fine guys at Echonest, who kindly provide a proper API for this. My proof of concept is running on a spare MacBook.

Here’s how it works on the Mac at the moment:
1. OS X Automator runs in a 90-second loop, recording audio with QuickTime and then running a CLI PHP script.
2. The PHP script first converts the recorded .mov file to .mp3 with ffmpeg/lame.
3. The PHP script then runs the Echonest binary for music fingerprinting, which generates a JSON string.
4. Part of the JSON string is sent via curl to the Echonest service and (hopefully) resolved.
5. Echonest returns artist, title and a unique id. This id is saved in a recent log so that the PHP script can skip a run if it runs twice during the same song.
6. Artist and title are curled back to our LAMP server, where we save all plays in a database (MongoDB at the moment).
7. Artist and title can then be sent to any service you wish to interact with.
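The “recent log” in step 5 boils down to remembering the last resolved id and when it was seen. A sketch of that idea – the actual pipeline is PHP, and these names are ours:

```javascript
// Skip a resolve if the same song id was seen within the last `windowMs`
// milliseconds, so two fingerprint runs of one song count as one play.
function createRecentLog(windowMs) {
  var lastId = null, lastSeen = 0;
  return function shouldSkip(id, now) {
    if (id === lastId && now - lastSeen < windowMs) return true;
    lastId = id;
    lastSeen = now;
    return false;
  };
}
```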

Making this project was pretty straightforward, except for a minor obstacle which took me a few hours to figure out. It turns out that the Echonest binary calls ffmpeg internally, and for some reason Automator couldn’t find ffmpeg in its $PATH. When I realized this, the fix was done in a second: I just needed to make a symlink from the $PATH Automator used to where I had ffmpeg locally.

Next step: I’ll try this for a few days and see how well it performs. If it’s good enough I’ll buy a tiny Linux box and give it a pair of RCA jacks.

Links:
The Earth People account on last.fm which we scrobble vinyl to at the moment
The awesome Echonest service

NOW On Roskilde

Now On Roskilde is a mobile application that users can easily add to their home screen.
It shows what’s on right now, together with related artists playing in the near future.

Since nothing is playing right now (plus the correct feed hasn’t been published yet), the app is pretty worthless at the moment. There is a way to get a preview though, by adding a query string to the URL. The URLs below will fetch the Roskilde data feed from 2010.

# Thu, July 1st 2010, 19:43
http://nowonroskilde.com/?timestamp=1278013434

# Fri, July 2nd 2010, 19:43
http://nowonroskilde.com/?timestamp=1278017704
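The preview trick is just reading an optional timestamp from the query string and using it as “now” when filtering the schedule. A sketch of that idea – the function names and schedule format are assumptions, not the app’s actual code:

```javascript
// If ?timestamp= (Unix seconds) is present, use it as "now" (in ms);
// otherwise fall back to the real clock value passed in.
function previewTime(queryString, realNowMs) {
  var match = /[?&]timestamp=(\d+)/.exec(queryString);
  return match ? parseInt(match[1], 10) * 1000 : realNowMs;
}

// Pick the gigs whose time range contains "now".
function nowPlaying(schedule, nowMs) {
  return schedule.filter(function (gig) {
    return gig.start <= nowMs && nowMs < gig.end;
  });
}
```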

nowonroskilde.com by Earth People is a quick hack put together for the Roskilde Labs competition. Also, feel free to use the PHP class we put together for this.

Extracting mood data from music


We have provided data from the Moody database to a group of researchers in the Netherlands, who have been doing some really interesting work. Menno van Zaanen gives us a report:

As a researcher working in the fields of computational linguistics and computational musicology, I am interested in the impact of lyrics in music. Recently, I have been researching what makes a song fit a particular mood. Why do we find some songs happy and others sad? Is it mainly the melodic part or mainly the lyrics that make the mood of a song? Do we agree upon this feeling or, in other words, do we consistently assign a mood to a song?

In particular, I am interested in building computational models that allow us to automatically assign a mood to a particular song. If we can successfully build such a model, we can also build a system that can filter songs from a large music collection based on their mood. For instance, your music player can create a playlist containing only happy songs by analyzing the songs in your music collection without manual interaction. Without such a system, people would need to listen to each song and indicate its mood by hand.

When building a computational model of mood in musical pieces, we need to have access to data. For instance, when we want to evaluate our model, we need musical pieces for which we know the corresponding mood, so we can check whether our model assigns the same mood as people have. Additionally, we use annotated data (songs together with their mood) as training data. This allows us to fine-tune our models to human preferences.

The data that we require for our computational model has to be annotated by people (as we are trying to model their preferences). Fortunately, the data collected using the Moody application fits our purposes exactly. People annotate their music into mood classes, which encodes information on what people think of these songs. This information is stored in a database for which the Moody plug-in acts as a graphical user interface.

The first dataset I received from the Moody database contained a list of songs, artists and mood tags, which I call the Moody Tags dataset. There is a total of 16 possible moods, which can be represented in a two-dimensional plane. One axis describes the valence, or polarity, of moods and the other axis describes arousal, or amount of energy. This looks exactly like the kind of information one needs when building a computational model of mood preferences.

While working with the Moody Tags dataset, I wondered to what extent people consistently assign mood tags to music. The answer to this question cannot be found in the Moody Tags dataset, as there is only one tag assigned to each song. How exactly is this tag selected, given that multiple people may annotate the same song with different mood tags?

It turns out that in the Moody database, tag counts are stored that describe for each value on the two axes, how often that value has been used to tag that particular song. Effectively, the Moody Tags dataset provides the tags that fit with the most often tagged value for both of the axes for each song. The raw counts of the tags can be extracted from the database as well and I will call that dataset the Moody Counts dataset.

Based on the Moody Counts dataset, I can analyze to what extent people agree with each other when assigning mood to a song. Normally in the area of annotation, one would measure the amount of agreement, which is called inter-annotator agreement, using a metric such as Cohen’s Kappa or Krippendorff’s Alpha. Unfortunately, in this case, these metrics cannot be used, since I do not know exactly which annotator (user) tagged which songs. Therefore, I need to come up with other metrics.

To make sure the dataset has enough annotations per song, I checked the average number of annotations per song, which is approximately 19. The distribution of the number of annotations per song can be found in the figure above. On the x-axis you can find the number of annotations per song and on the y-axis the number of songs with that many annotations. As you can see, there are just over 400 songs with only one annotation, but there are also songs with over 100 annotations.

Next, I checked the percentage of songs that have only one mood assigned to them. This holds for 56.5% of the songs. However, it is a bit unfair to provide only this measure: there are quite a few songs with only one annotation, which means there can only be one mood assigned to them.

Computing the average percentage of the majority tag (which is the number of annotations of the most often selected mood divided by the total number of annotations for that song), shows that for both the arousal and valence dimensions this amounts to just over 95%. Again, here I am also using the songs with only one annotation, but if I remove these songs, so only take songs with two or more annotations into account, the percentages are still over 94%. Even when I only take songs into account that have 50 or more annotations, these percentages are over 91%.
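The majority-tag percentage described here is straightforward to compute from the raw counts. A sketch of the metric, given the per-value tag counts for one song on one axis (the researchers’ actual code is not shown; this is just an illustration):

```javascript
// Majority percentage: count of the most often selected mood value
// divided by the total number of annotations for the song, times 100.
function majorityPercentage(counts) {
  var total = counts.reduce(function (a, b) { return a + b; }, 0);
  var max = Math.max.apply(null, counts);
  return 100 * max / total;
}
```

For instance, a song tagged 19 times with one value and once with another has a majority percentage of 95, matching the “just over 95%” averages reported above.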

Based on the data I have available at the moment, it seems that people tend to agree on the mood songs have. This makes it an interesting topic to research. Now that we know that people generally agree, can we actually build a system that does this for us, and will this system be as consistent as we are in assigning mood to songs? Obviously, before we can answer this question, there are many other questions we need to answer first: What exactly contributes most to the mood of a song, the melodic part or the lyrics? What properties of either melody or lyrics contribute most to the mood of a song? With datasets that are collected by applications such as Moody we may, at some point in the future, find out exactly how people perceive the mood of music.

Menno van Zaanen
Assistant professor at Tilburg University, the Netherlands

SimpleSong 0.2

SimpleSong is becoming quite an iTunes competitor, in its own way. I’ve used it quite a bit myself, much more often than I thought I would. If you’re tired of dealing with a library and want a library-free music player with a minimalistic approach, lightweight, and close to zero startup time – this might be your thing. Download SimpleSong
[Screenshot: SimpleSong]
New stuff:
– You can now double-click or drag tracks from the Finder to the dock icon, to open the tracks as a playlist. This totally adds usefulness.

– Cmd + right arrow skips to next song.

– Fixed a small bug: the app didn’t stop looking for the next track at the end of a playlist.

So try a little do re mi fa!

Sing a simple song

If you’re like me you have a lot of music on your computer. You’re also fed up with maintaining a library of it all. Half the time I would just listen to music in Quick Look, if only it wouldn’t stop the moment I leave the Finder.

So this is as simple as it gets. You enter a search term and SimpleSong does a Spotlight search on your computer and plays the search results. You can search for anything really – artist, album, genre, comment – the whole ID3 tag is accessible in Spotlight. A narrower search query finds results quicker, though. You can now also drag tracks from the Finder to the dock icon to open them as a playlist, which of course totally adds usefulness.

You play it, you stop it, you skip a song. That’s it.


Download SimpleSong

[Screenshot: SimpleSong]

search for artist from iTunes

So, say you’ve got this one great track in iTunes, and you want to look for more music from the same artist. Here’s a simple little script that can ease things up a bit.

Download it

This script makes an Amazon search in Firefox for the artist, but as you might guess, you can make it go to whatever site you like and search for content. Open the script, replace the URL with a new one, and save. You can also make it use Safari instead of Firefox.

To install it, place it in your home folder/Library/iTunes/Scripts.
If that folder doesn’t exist, make a new one and call it Scripts.
Quit iTunes, restart, and an AppleScript icon should appear in the iTunes menu bar, with your script ready to be clicked on.