Stupid Hackathon – again

Last February ~70 people got together and built some very weird stuff: a chatroulette clone for castanets, captchas to keep out cats, big data butt probes, Tinder for woodpeckers and, gah, so much more.

Personally, my fondest memory was the inclusive and jolly feeling. It truly felt like these 70 attendees were the great minds of our generation – who finally didn’t need to use their cognitive superpowers to do work stuff.

On February 10th 2018 we’re doing it all over again. Please attend.

/Peder and the rest of the bunch at Earth People

Motion tracking for DOOH

Earth People was approached by Forsman & Bodenfors to make a Digital Out Of Home (DOOH) screen come to life. The objective of the campaign was to showcase the client’s eye makeup products. The eyes were shot in 4K and our task was to make these eyes “look” at any motion in front of the screen.

Kinect to the rescue! Or?

We quickly agreed that the Kinect would be the most suitable choice for tracking, since Kinect 2 provides APIs for body tracking. After much trial and error we realized that the Kinect had too many cons versus pros in our case.

First of all: the hardware. In order to get any decent performance out of the Kinect we needed a computer powerful enough to process the incoming data. This meant that our hardware footprint would be huge – much too big for the tiny space available inside the outdoor screen.

Secondly: speed. The Kinect is meant to be enjoyed at home with a small group of people. There is time to move around the room and position yourself so that the Kinect recognizes you and starts calculating/tracking your body movement. We don’t have that kind of time when people are just passing by.

I’ve done several projects in the past using tracking, and I knew that it would be possible to get a good enough result using a plain old webcam. We don’t need 4K video or pin-point accuracy in this case. We just need to know where people are, if there are any at all, in front of the screen.

Processing image data

Our solution for tracking comes in roughly four steps which are, by themselves, quite straightforward.

1. Get the raw webcam feed

This part is a walk in the park. In our case we use Processing and retrieve the webcam data using the standard Capture library.

2. What has changed?

By storing the previous frame from the webcam we can calculate the difference between the pixels of the current frame and the previous one. This gives us a rough idea of what is going on in front of the camera. Doing this using bitwise operations leaves us more processing power for the next computation.
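The real thing is Processing; sketched in JavaScript over flat grayscale pixel arrays, the differencing step might look something like this:

```javascript
// Step 2 sketch: per-pixel difference between two grayscale frames
// stored as flat Uint8Arrays (one byte per pixel).
function frameDiff(prev, curr) {
  const diff = new Uint8Array(curr.length);
  for (let i = 0; i < curr.length; i++) {
    // Absolute difference per pixel. With packed 32-bit RGB pixels you
    // would unpack channels with bit shifts (pixel >> 16 & 0xff, …)
    // instead of slower colour-accessor calls.
    diff[i] = Math.abs(curr[i] - prev[i]);
  }
  return diff;
}
```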

3. Normalize the result

The data we get from the difference calculation is very noisy and blurred, due to the motion. In order to proceed we need to sanitize the output, removing “dust” and tiny pixel changes. The small changes are probably not a person walking by anyway. The “clean” data from our normalization (or thresholding) process is a great start, but still registers motion all over the place. In this example, my t-shirt is moving a lot, but we don’t necessarily want to track that.
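The thresholding itself is a one-liner per pixel. A JavaScript sketch (the threshold value here is an assumption; it needs tuning per camera):

```javascript
// Step 3 sketch: turn the noisy difference image into a binary mask,
// discarding tiny pixel changes ("dust") below a threshold.
function normalize(diff, threshold = 30) {
  const mask = new Uint8Array(diff.length);
  for (let i = 0; i < diff.length; i++) {
    mask[i] = diff[i] >= threshold ? 255 : 0; // white = motion
  }
  return mask;
}
```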

4. Predictions

We know that high concentrations of motion should be clustered together. My hand moving produces a much greater change than my t-shirt. By looking at each individual white pixel in our normalized output we can connect that pixel with the white pixels immediately surrounding it. Doing this recursively, and registering a new cluster of pixels when we no longer find any white pixels next to the current one, we get a long list of isolated clusters we call “islands”.
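In code this is a flood fill. Here is a JavaScript sketch (iterative rather than recursive, to avoid blowing the stack on big islands; 4-connected neighbours assumed):

```javascript
// Step 4 sketch: group adjacent white pixels in the binary mask into
// "islands" using an iterative flood fill.
function findIslands(mask, width, height) {
  const seen = new Uint8Array(mask.length);
  const islands = [];
  for (let start = 0; start < mask.length; start++) {
    if (mask[start] === 0 || seen[start]) continue;
    const pixels = [];
    const stack = [start];
    seen[start] = 1;
    while (stack.length) {
      const p = stack.pop();
      pixels.push(p);
      const x = p % width, y = (p / width) | 0;
      for (const [nx, ny] of [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]]) {
        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
        const n = ny * width + nx;
        if (mask[n] && !seen[n]) { seen[n] = 1; stack.push(n); }
      }
    }
    islands.push(pixels); // one cluster of connected white pixels
  }
  return islands;
}
```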

The amount of data to process is still enormous. We need to boil this down, quickly. Here’s what we did: each cluster smaller than roughly 200 pixels is discarded immediately. In this example, that reduced the number of islands from well above 150 down to about 5. This is still too much data for us to estimate where the motion is coming from, so a last reduction pass is brute-forced: any island whose bounding box covers another island’s (highlighted in green) is merged into it.
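A sketch of that reduction pass, operating on island bounding boxes (the 200-pixel cutoff is from the post; the data shape is an assumption):

```javascript
// Reduction sketch: drop islands under a minimum pixel count, then
// brute-force merge islands whose bounding boxes overlap.
function boundsOverlap(a, b) {
  return a.x0 <= b.x1 && b.x0 <= a.x1 && a.y0 <= b.y1 && b.y0 <= a.y1;
}
function reduceIslands(islands, minPixels = 200) {
  const boxes = islands.filter(i => i.count >= minPixels).map(i => ({ ...i }));
  let merged = true;
  while (merged) { // keep merging until nothing overlaps anymore
    merged = false;
    outer:
    for (let i = 0; i < boxes.length; i++) {
      for (let j = i + 1; j < boxes.length; j++) {
        if (boundsOverlap(boxes[i], boxes[j])) {
          boxes[i] = {
            x0: Math.min(boxes[i].x0, boxes[j].x0),
            y0: Math.min(boxes[i].y0, boxes[j].y0),
            x1: Math.max(boxes[i].x1, boxes[j].x1),
            y1: Math.max(boxes[i].y1, boxes[j].y1),
            count: boxes[i].count + boxes[j].count,
          };
          boxes.splice(j, 1);
          merged = true;
          break outer;
        }
      }
    }
  }
  return boxes;
}
```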

The largest island contains the most motion. This is what we are interested in.

5. What should the eyes look at?

This is not really tracking related, but in order to get a single point of interest for our 15 eyes to look at we need to be more specific than “this island looks good enough”. Each time we receive a new island to focus on, we place a vector at the center of that island. This is our target. Our current focus vector then moves closer to that target each frame, using a decay value so that we never quite reach the final target position.
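That decay is a plain linear interpolation toward the target. A sketch (the decay value is an assumption; tune it for how quickly the eyes should react):

```javascript
// Focus sketch: each frame, move a fraction of the remaining distance
// toward the target, so motion eases out and never snaps to the target.
const DECAY = 0.5; // assumed value
function stepFocus(focus, target, decay = DECAY) {
  return {
    x: focus.x + (target.x - focus.x) * decay,
    y: focus.y + (target.y - focus.y) * decay,
  };
}
```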

What more can we do with this data?

Since we know how many pixels changed between each frame, we can set a threshold for how many pixels need to contain motion before we proceed. If we never reach that threshold, we consider the current image empty. No motion = no people = no need to process the image further.
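That early-out check is tiny. A sketch (the cutoff is an assumption):

```javascript
// Skip all further processing when too few pixels changed,
// i.e. treat the frame as empty.
function hasMotion(mask, minChangedPixels = 500) {
  let changed = 0;
  for (let i = 0; i < mask.length; i++) if (mask[i]) changed++;
  return changed >= minChangedPixels;
}
```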

There are many more things going on behind the scenes, but this should give a glimpse of what we ended up with and why.

SXSW Music Discovery

This year we decided to go to SXSW. It’s been a couple of years since last time, so I really hoped that the artist lineup and music schedule would be more comprehensible than it was last time we went. It wasn’t. To make sense of all this data, it needs personalization. Humans tend to be very specific when it comes to their taste in music, so just having a list of a couple of thousand artist names doesn’t cut it.

Since there were no other services available I figured that even a poor one could help people like ourselves. I started by scraping the SXSW website (5 lines of sloppy php):
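Something along these lines – sketched in JavaScript here rather than the original PHP, and with the URL and markup pattern being assumptions:

```javascript
// Scraper sketch: fetch the schedule page and pull out artist names.
// The URL and the "artist-name" class are made up for illustration.
async function fetchArtistNames() {
  const html = await (await fetch('https://schedule.sxsw.com/events/music')).text();
  return extractArtistNames(html);
}

// Pull artist names out of the listing markup (pattern is assumed).
function extractArtistNames(html) {
  return [...html.matchAll(/class="artist-name">([^<]+)</g)].map(m => m[1]);
}
```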

Once we’ve got the data from the SXSW website, it needs to be mashed with some Spotify metadata:
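Roughly like this – a JavaScript sketch of the idea rather than the actual PHP, with the MySQL caching left out. The search and related-artists endpoints are real Spotify Web API endpoints:

```javascript
// Mash-up sketch: resolve each scraped name to a Spotify artist id,
// then grab its related artists. In the real project results were
// cached in MySQL; here they are just returned.
function searchUrl(name) {
  return 'https://api.spotify.com/v1/search?type=artist&q=' +
    encodeURIComponent(name);
}

async function lookupArtist(name) {
  const data = await (await fetch(searchUrl(name))).json();
  return data.artists.items[0] || null; // best match; may be wrong for tiny acts
}

async function relatedArtists(artistId) {
  const res = await fetch(
    `https://api.spotify.com/v1/artists/${artistId}/related-artists`);
  return (await res.json()).artists;
}
```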

Haha I know! This is very poor code. Using MySQL as a cache… It’s amazing how such a good result can be made with such bad practices. I would (probably) never do this for a client, but for a sloppy side project like this – sure.

Anyway. Now that we’ve got all the artists playing at the festival, and all the related artists, all we need is for people to sign in using Spotify’s OAuth, fetch their top artists and do some id matching in our MySQL database to see which artists to recommend.

An added bonus was to automatically create a playlist in the user’s Spotify account. This required two more API calls: first create an empty playlist, then add all the track URIs.
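Those two calls, sketched in JavaScript (token handling omitted; the endpoints are the real Spotify Web API ones, which cap track additions at 100 URIs per request):

```javascript
// Playlist sketch: 1) create an empty playlist, 2) add the track URIs.
async function createPlaylistWithTracks(token, userId, name, trackUris) {
  const headers = {
    Authorization: 'Bearer ' + token,
    'Content-Type': 'application/json',
  };
  const playlist = await (await fetch(
    `https://api.spotify.com/v1/users/${userId}/playlists`,
    { method: 'POST', headers, body: JSON.stringify({ name }) })).json();
  for (const uris of chunk(trackUris, 100)) { // max 100 URIs per request
    await fetch(`https://api.spotify.com/v1/playlists/${playlist.id}/tracks`,
      { method: 'POST', headers, body: JSON.stringify({ uris }) });
  }
  return playlist;
}

function chunk(arr, size) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}
```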

Some gotchas in the Spotify API we’ve learned (this might be bad advice, absolutely no idea…):
a) SXSW artists are often “up and coming” and have less than 1000 listens on Spotify. This makes Spotify unsure…

b) Rate limiting is real. When making lots of requests, be sure to sneak in a sleep(1) here and there. Seems to help.
c) Don’t sign requests you don’t need to sign. Since rate limits are counted towards your app id, this is a neat way of sneaking in some extra requests. Stuff like searching for related artists doesn’t currently require authentication.
d) Cache as much as you can before releasing your app. Fetching top tracks for 50 artists synchronously makes you hit the rate limit, and it will also hit your server’s network IO hard if you have a lot of simultaneous users. In this case we had a fixed set of artists, so it made sense to prefetch all their top tracks in a local db.

Sidenote: We deliberately chose to not name this service after SXSW to avoid trademark infringement. It’s called


Our new tool finds “hidden” WordPress pages exposed by just released WP REST API

In December, WordPress 4.7 was released. The coolest part of this release was the inclusion of the WordPress REST API. After being in development for quite some time, it was finally included in core.

The WordPress REST API is great for developers because it makes it very easy to get all pages, posts and users from a WordPress site and use them in any way they want, using JavaScript or PHP or basically any programming language.

Did we say all pages? Almost: most of your posts, pages and users are exposed to the public with this API. That includes pages that have no public links to them and pages that are not available in any menus on your website.
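The endpoints are plain unauthenticated GETs on any 4.7+ site. For example (the domain is a placeholder):

```javascript
// WP REST API sketch: list public pages, posts and users from a
// WordPress 4.7+ site. No authentication needed for public content.
const base = 'https://example.com/wp-json/wp/v2';

function endpoint(type) {
  return `${base}/${type}`;
}

async function listContent() {
  const [pages, posts, users] = await Promise.all(
    ['pages', 'posts', 'users'].map(t => fetch(endpoint(t)).then(r => r.json())));
  return { pages, posts, users };
}
```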

So some of the WP devs here at Earth People got curious about the API and what exposed stuff we could find on websites that had updated to 4.7. We figured that an easy way to test this was to create a Google Chrome extension.

Hello there WP Content Discovery Chrome Extension

So we made the extension, and we called it WP Content Discovery.

Here’s how it works:
It adds an icon to your Chrome toolbar. By default it only displays the letter “w”, as in WordPress. When you visit a WordPress-powered website and the API is detected, the icon lightens up and displays “API” in blue.

The extension icon in action. On the first site no API is detected. On the second site the API is detected and the icon shows a blue API text.

Now the fun starts: click the icon to get a list of pages, posts and users on that website!

Here is an example from the website of admin activity logger Simple History:

Here we can see that the extension indeed did find some pages on the website we tried it on…

Please try the extension. And please let us know what you think here in the comments!

One last thing… the API may freak some people out…

Even if all the data that you can get publicly from the REST API is already available somewhere in WordPress, it does freak some people out that it actually is possible to get the content so easily.

It is, however, pretty easy to disable the API if you find it too scary.


Giphy reactions via SMS on an old CRT

Audience participation during a conference is tricky. You want it to be relevant so it’ll have to be some kind of tech that responds quickly. And at the same time it should be moderated, since the audience can write offensive stuff. Tricky stuff.

For our track at Internetdagarna I’ve built a little something that hopefully does this, and also shows what Creative Technology can be about.


  1. Get an old CRT TV. Because it looks cool.
  2. Connect a micro computer (like the C.H.I.P.) via an old VCR (you need the VCR to do RF modulation from the computer’s composite output)
  3. Register a phone number on Twilio and forward incoming text messages to a database
  4. Write a few lines of crappy jQuery to poll this database and fetch a Giphy gif based on the text
  5. Run this crappy jQuery in a web browser on the micro computer

This way the audience can text reactions to the TV, and it’ll feel like a nice clash of new and old at the same time.
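Step 4 could be sketched like this – plain JavaScript rather than the actual jQuery, with the database endpoint and API key being assumptions (Giphy’s translate endpoint is real):

```javascript
// Polling sketch: grab the latest text message from a made-up endpoint,
// ask Giphy's translate API for a matching gif, show it on screen.
const GIPHY_KEY = 'YOUR_API_KEY'; // assumption: your Giphy API key

function giphyUrl(text) {
  return 'https://api.giphy.com/v1/gifs/translate?api_key=' + GIPHY_KEY +
    '&s=' + encodeURIComponent(text);
}

async function poll() {
  const { text } = await (await fetch('/latest-sms.json')).json();
  const giphy = await (await fetch(giphyUrl(text))).json();
  document.querySelector('img#reaction').src = giphy.data.images.original.url;
}

// Only start polling in a browser; every couple of seconds is plenty.
if (typeof document !== 'undefined') setInterval(poll, 2000);
```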

Post mortem. All this crappy tech + my crappy code made this setup crash every 15 minutes. Still great.

Creative Technology @ Internetdagarna

Just after my blog post about how my family uses Slack, the nice people at IIS asked if I wanted to host a track at Internetdagarna in November. Since the blog post came about while misusing technology, I figured the track should be about this. The name of the track: Creative Technology (because that’s the fancy name for goofing around with web stuff).


Friends and colleagues from the industry will participate, and what we really want is to inspire people to make new stuff using technology. We’ll be showcasing the limitations and possibilities with new platforms and ecosystems and hopefully inspire you to combine them into products and services only you can imagine. We’ll have people talking about prototyping hardware, explain the basics of machine learning, live code a bot and more. Not a lot of detail, more examples and inspiration.

Meet the speakers:
Sonja Petrovic Lundberg
Jakob Öhman
Sanna Frese
Carl Calderon
Magnus Östergren
David Eriksson
Adam Agnaou
Farvash Razavi
Darius Kazemi
Fredrik Mjelle
Maria Starck
Christian Heilmann
Fredrik Heghammar
Peder Fjällström

After this day you will probably have more than a few ideas of stupid things you want to learn and/or build stuff with. You don’t need to be a developer to attend, but being childish does help.

Get your ticket here:

Use the code 2016IND to get 20% off your ticket.


PS. People who like this also like this.

Stupid Hackathon Sweden

In February 2016 an event took place in New York called “STUPID SHIT NO ONE NEEDS & TERRIBLE IDEAS HACKATHON”. The stuff from this hackathon both made me giggle and gave me hope.

The last 10 years on the web have been mostly about Salesforce integrations, paywalls, content marketing, gulp-or-grunt, webscale NoSQL and a/b-testing. This is all good, and our company is built on money from stuff like this. But I remember when the Internet was something else. When everyone with a Geocities account made weird stuff no one had ever thought of. Meaningless stuff no one needs. Terrible ideas. It was an innocent and beautiful time.

Back in February, when my Twitter feed suddenly filled up with 3D cheese printers and Tinder for babies, it felt like I could breathe freely again.


In February, the first Swedish version will take place.
I hope it will at least make you giggle too. /

(All hacks can be found in this GoogleDoc)


Time tracking via Slack

Time tracking in general sucks. And we don’t time track unless we work on projects that we bill by the hour. Since we live our lives in Slack, and are pretty decent developers, I figured we could make this suck slightly less.


Each channel has a command called /timedude which takes a few options:
/timedude add 1h added spacer.gif (adds 1 hour for today along with a comment)
/timedude list (lists your own activity)
/timedude listall (lists everyone’s activity in this channel)
/timedude export (responds with a url to a csv export)

A new addition to Timedude is an integration with the git and CI/deploy platform we just started using. An hour or so after the end of the workday, Timedude checks all git repos for commits, and if someone has made a commit to a repo without reporting it in the corresponding Slack channel, Timedude pings the committer on Slack.
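The core of that nightly check boils down to a set difference. A sketch (data shapes are assumptions; the Slack ping itself is left out):

```javascript
// Nag sketch: find committers who pushed to a repo today but logged no
// hours in the matching Slack channel.
function findUnreported(commitsByAuthor, reportedAuthors) {
  const reported = new Set(reportedAuthors);
  return Object.keys(commitsByAuthor)
    .filter(author => commitsByAuthor[author] > 0 && !reported.has(author));
}
```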

You are free to take our code and use it as you see fit. It doesn’t come with support, but it’s only like 200 lines of code in total so you can probably guess how it’s supposed to work.

(The code does very much suck, but it works. Refactor coming up. Any day now…)

Inline video in Mobile Safari on iOS

Recently we released a game for one of our clients, Red Bull. The game features two awesome breakdancers which you control. Hitting the markers on the beat of the music gives you performance and accuracy points.


The biggest challenge we faced during R&D was to play video of the dancers while allowing the player to interact. Currently, there is no straightforward way of doing this on iPhone devices: any playing video tag will automatically start up in fullscreen using the built-in media player. This is where our little hack came in useful.

Through extensive testing and research we managed to extract image data from a video tag. Setting the currentTime property will stealthily move the playhead of the video and make that frame available for rendering onto a canvas. Realizing this, we created our own custom player capable of extracting 25 frames a second, moving to defined clips and performing loops. All on the “incompatible” iPhone.
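A minimal sketch of the seek-and-draw loop (not our actual player; it assumes a muted, paused video element and waits for the `seeked` event before painting):

```javascript
// Custom player sketch: seek the video one frame at a time and paint
// each frame onto a canvas instead of letting iOS go fullscreen.
const FPS = 25;
const frameTime = (frame, fps = FPS) => frame / fps; // frame index -> seconds

function drawFrame(video, canvas, frame, onDone) {
  video.addEventListener('seeked', function handler() {
    video.removeEventListener('seeked', handler);
    canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
    onDone();
  });
  video.currentTime = frameTime(frame); // stealthily move the playhead
}
```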

Video compression played a huge role in the outcome. Usually videos have keyframes around each second of material, which means that seeking often snaps to those keyframes. We discovered, however, that our video material got even smaller in file size when setting a keyframe on every frame, and this allowed us to seek freely.


Performance is a pain, and video buffers don’t make it any easier. Video elements only buffer a certain amount of data automatically. Jumping back and forth in a video shifts the buffers around while playing, resulting in lag and dropped frames. This wasn’t good enough for gaming. Solution: Blobs. Loading the complete video into a Blob, from which the video player can read, removed the delays and made video access instant. Now, this could be solved using several different techniques, for instance Application Cache, but we came to the conclusion that Blobs, plus trust in the regular browser cache, would be sufficient and optimal in terms of performance.
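The Blob trick itself is short. A sketch (browser-only; the helper below shows the kind of buffered-ranges check that becomes unnecessary once everything is local):

```javascript
// Blob sketch: download the whole file up front so seeks never wait on
// the network buffer.
async function loadVideoAsBlob(video, url) {
  const blob = await (await fetch(url)).blob();
  video.src = URL.createObjectURL(blob); // all data is now local
}

// For comparison: with a network-backed src you'd be watching buffered
// ranges like these, [start, end] pairs in seconds.
function bufferedFraction(ranges, duration) {
  let covered = 0;
  for (const [start, end] of ranges) covered += end - start;
  return covered / duration;
}
```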


Controlling ssh access with GitHub organizations


Ok, I’m coming clean: controlling access to our various servers has been a mess. Sure, we’ve stored passwords in a safe way (1Password for Teams ftw!), but what happens if someone leaves the company, or that root password somehow gets out? Well, we did not have a plan for such a thing.

Sure, setting up SSH keys is easy, but we never got around to it. We manage more than a handful of servers, and making sure the authorized_keys files on these boxes are up to date just felt unmanageable.

This changed today when I got the idea to make use of the GitHub feature that exposes users’ public keys. I wrote a little script that fetches all users within our GitHub organization, pulls down their public keys and updates the ~/.ssh/authorized_keys file nightly with a cron job.

Yes, this is PHP, but when all you’ve got is a hammer, everything looks like a nail. This needs some error handling too, but I thought I’d share it anyway.