Politicians go influencer

Influencers make a lot of money.

Politicians make a lot of money.

So how do you make a ridiculous amount of money? Be a politician and an influencer at the same time!

What would that look like? I built a fake site that looks like instagram.com, where I pull in data from our best-known influencers and swap their names for well-known politicians. The images are fetched from Google Images using a keyword taken from the caption.

It turned out fun, but I'm choosing not to link to it since it's a bit borderline rights-wise.

/ Johanna


All the silence from Sveriges Radio P2

A while ago I grabbed the bull by the horns and built a site that collects all the silence from P2. If you, like me, listen to this radio channel now and then, you may also have noticed that it allows far more silence than other channels. There are dramatic pauses, string concertos fading out, and on the whole much greater dynamics than a commercial-radio listener is used to. There is a calm in these dynamics, and in the silence in particular. Naturally, you want to listen to ONLY that.

Collecting all the silence from P2 took only a couple of lines of ffmpeg wrapped in some ugly PHP, plus a simple site that could play it all back together.
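The gist of it, sketched in Python rather than the original PHP (ffmpeg's silencedetect filter and its log format are real; the thresholds and the helper names are made up for illustration):

```python
import re
import subprocess

# ffmpeg logs detected silence to stderr, e.g.:
#   [silencedetect @ 0x7f...] silence_start: 12.52
#   [silencedetect @ 0x7f...] silence_end: 17.01 | silence_duration: 4.49
SILENCE_RE = re.compile(r"silence_(start|end): ([0-9.]+)")

def detect_silence(path, noise_db=-40, min_dur=1.0):
    """Run ffmpeg's silencedetect filter on an audio file."""
    result = subprocess.run(
        ["ffmpeg", "-i", path,
         "-af", f"silencedetect=noise={noise_db}dB:d={min_dur}",
         "-f", "null", "-"],
        capture_output=True, text=True)
    return parse_silence(result.stderr)

def parse_silence(log):
    """Turn silencedetect log output into (start, end) pairs in seconds."""
    times = [(m.group(1), float(m.group(2))) for m in SILENCE_RE.finditer(log)]
    starts = [t for kind, t in times if kind == "start"]
    ends = [t for kind, t in times if kind == "end"]
    return list(zip(starts, ends))
```

The (start, end) pairs can then be cut out of each recording and concatenated into pure silence.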

All the code here and the site itself here.

/Peder

Stupid matkasse

The trend of ready-made meal kits is a threat to our curiosity and creativity in the kitchen. Everyone cooks the same food. Soon the entire development of Swedish home cooking will have stagnated completely.

That is exactly what I was not thinking when I decided to build my (award-winning!) hack at this year's Stupid Hackathon. I probably just thought it might turn out a bit funny. In reality it turned out hysterically funny. Whether that was due to fatigue, or to some things simply getting funnier when a lot of people laugh at the same time, I don't know.

You'll find the recipes here: https://stupid-matkasse.firebaseapp.com/

/Hjalle

Kryptosemla – eat a semla in the blockchain

The value of the semla has gone up over time.

After the semmelwrap and the nachosemla comes the Kryptosemla! You can finally eat a semla in the blockchain. A kryptosemla is mined by whisking back and forth on the screen, so the lactic acid in your arm becomes your proof of work.

You can eat a kryptosemla, or give it to someone else. Underneath sits a blockchain where every transaction is hashed. Mining earns you a reward of 1 semla, but on Fat Tuesday the reward was 3 kryptosemlor.

When Bitcoin is mined, a hash of the previous content is computed, and a valid hash is one that starts with a certain number of zeros, e.g.

00000000000000000019296c5484b73953932977e976815dd6bc6cbd24d2d686

For a kryptosemla, the hash must start with ce (as in semla), e.g.

ceb93efb8b5a7dc7fe1b7ec663e37644f587200b22bacbc19ce02040fdaf00f7

So it takes considerably less processor power to find a valid hash for a kryptosemla than for Bitcoin, but more whisking power. Each whisk stroke computes a candidate hash, and once you have whisked out a valid one, a call is made to the server, which checks whether you beat the other miners, whereupon you get your reward…!
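As a sketch, the whisk-mining could look like this in Python (the post doesn't name the hash function or block layout; SHA-256 and the field format below are assumptions):

```python
import hashlib

def mine_semla(prev_hash, transactions, prefix="ce"):
    """Try nonces until the block hash starts with `prefix` (ce, as in semla)."""
    nonce = 0
    while True:
        block = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1  # in the real thing, one whisk stroke = one attempt

nonce, digest = mine_semla("0" * 64, "fredrik ate 1 semla")
```

A two-hex-character prefix means roughly 1 valid hash in 256 attempts, compared with the ~19 leading zeros of a Bitcoin block hash, which is why arm strength matters more than CPU here.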

Since the value of the kryptosemla was still uncertain in the beginning, miners showed up who tried to game the system to get semlor, for example by using computing power instead of cream-whisking. There were even serious DDoS attacks meant to damage the whole semla economy. Instead of eating semlor, the perpetrators gave hundreds of thousands of semlor to themselves. Semlor circulated in ways that the cream-based blockchain technology was not ready for, and the site had to shut down on Ash Wednesday.

This is a project from Stupid Hackathon, February 2018. A kryptosemla is now worth about $11.

/ Fredrik

vabruari.sucks

Every parent of small children knows how vile February is. Preschools are overflowing with germs, and the number of complete workdays can be counted on one hand. On the preschools' front doors there are always notes with information about which illnesses are currently going around.

For this year's Stupid Hackathon we chose to interpret that feeling by building a preschool-note generator: www.vabruari.sucks. It pulls in illnesses from various sources (Vårdguiden, Wikipedia and other sites) and generates a photorealistic image from one of Stockholm's preschools. The preschool names are fetched from Stockholm.se.

And yes, sometimes it crosses the line. Because the data is fetched dynamically, it is hard to blacklist outright inappropriate illnesses.

/Peder & Andreas

Stupid Hackathon – again

Last February, ~70 people got together and built some very weird stuff: a Chatroulette clone for castanets, captchas to keep out cats, big-data butt probes, Tinder for woodpeckers and gah, so much more.

Personally, my fondest memory is the inclusive and jolly feeling. It truly felt like these 70 attendees were the great minds of our generation – who finally didn't need to use their cognitive superpowers on work stuff.

On February 10th 2018 we’re doing it all over again. Please attend.

/Peder and the rest of the bunch at Earth People

Motion tracking for DOOH

Earth People was approached by Forsman & Bodenfors to make a Digital Out Of Home (DOOH) screen come to life. The objective of the campaign was to showcase the client’s eye makeup products. The eyes were shot in 4k and our task was to make these eyes “look” at any motion in front of the screen.

Kinect to the rescue! Or?

We quickly agreed that Kinect would be the most suitable choice for tracking, since the Kinect 2 provides APIs for body tracking. After much trial and error, we realized that the Kinect had too many cons versus pros in our case.

First of all: the hardware. In order to get any decent performance out of the Kinect, we needed a computer powerful enough to process the incoming data. This meant that our hardware footprint would be huge. Much too big for the tiny space available inside the outdoor screen.

Secondly: speed. Kinect is meant to be enjoyed at home with a small group of people. There is time to move around the room and position yourself so that the Kinect recognizes you and starts calculating/tracking your body movement. We don't have that kind of time when people are just passing by.

I’ve done several projects with tracking in the past, and I knew it would be possible to get a good enough result using a plain old webcam. We don’t need 4K video or pin-point accuracy in this case. We just need to know where people are, if there are any at all, in front of the screen.

Processing image data

Our solution for tracking comes in ~4 steps which are, by themselves, quite straightforward.

1. Get the raw webcam feed

This is walk-in-the-park easy. In our case we use Processing and retrieve the webcam data using the standard Capture library.

2. What has changed?

By storing the previous webcam frame, we can calculate the difference between the pixels of the current frame and the previous one. This gives us a rough idea of what is going on in front of the camera. Doing it with bitwise operations leaves us more processing power for the next computation.
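A minimal sketch of the idea, in Python rather than the Processing the project actually used (frames here are flat lists of grayscale values; the real code used bitwise tricks for speed):

```python
def frame_diff(prev, curr):
    """Absolute per-pixel difference between two grayscale frames (0-255).
    Big values mean something moved at that pixel between frames."""
    return [abs(c - p) for p, c in zip(prev, curr)]
```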

3. Normalize the result

The data we get from the difference calculation is very noisy and blurred. This is due to the motion. In order to proceed we need to sanitize the output, removing “dust” and tiny pixel changes. The small changes are probably not a person walking by anyway. The “clean” data from our normalization (or thresholding) step is a great start, but still shows high levels of motion all over the place. In this example, my t-shirt is moving a lot, but we don’t necessarily want to track that.
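Continuing the sketch, the thresholding step turns the noisy diff into a binary mask (the cutoff value below is made up; tune it to your camera):

```python
def threshold(diff, cutoff=32):
    """Binarize a diff frame: 1 where the change is big enough to matter, else 0.
    Small flickers below the cutoff are treated as noise, not motion."""
    return [1 if d >= cutoff else 0 for d in diff]
```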

4. Predictions

We know that a high concentration of motion should be clustered together. My hand moving produces a much greater change than my t-shirt. By looking at each individual white pixel in our normalized output, we can connect that pixel with the white pixels immediately surrounding it. Doing this recursively, and registering a new cluster of pixels when we no longer find any white pixels next to the current one, we get a long list of isolated clusters we call “islands”.

The amount of data to process is still enormous. We need to boil this down, quickly. Here’s what we did: each cluster smaller than roughly 200 pixels is discarded immediately. In this example, that reduced the number of islands from well above 150 down to about 5. That is still too much data to estimate where the motion is coming from, so a last reduction is brute-forced: any island whose bounding box (highlighted in green) is covered by another island’s is merged into it.

The largest island contains the most motion. This is what we are interested in.
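The island pass boils down to a flood fill over the binary mask. Sketched in Python (the real code was Processing, and the real size cutoff was ~200 pixels; the bounding-box merge is left out here):

```python
def find_islands(grid, min_size=3):
    """Flood-fill a binary grid into clusters of adjacent 1-pixels ("islands"),
    discarding islands below min_size. Returns them largest first."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    islands = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not seen[y][x]:
                stack, island = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    island.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(island) >= min_size:
                    islands.append(island)
    return sorted(islands, key=len, reverse=True)
```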

5. What should the eyes look at?

This is not really tracking-related, but in order to get a single point of interest for our 15 eyes to look at, we need to be more specific than “this island looks good enough”. Each time we receive a new island to focus on, we place a vector at the center of that island. This is our target. Our current focus vector then moves closer to that target each frame, using a decay value so that we never quite reach the final target position.
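The easing itself is a one-liner per frame (the decay value here is an assumption, not the one used in production):

```python
def step_focus(focus, target, decay=0.1):
    """Move the current focus point a fraction of the way toward the target.
    Repeating this each frame eases toward the target without ever snapping to it."""
    fx, fy = focus
    tx, ty = target
    return (fx + (tx - fx) * decay, fy + (ty - fy) * decay)
```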

What more can we do with this data?

Since we know how many pixels changed between frames, we can set a threshold for how many pixels need to contain motion before we proceed. If we never reach that threshold, we may consider the current image empty. No motion = no people = no need to process the image further.

There are many more things going on behind the scenes, but this should give a glimpse of what we ended up with and why.

SXSW Music Discovery

This year we decided to go to SXSW. It’s been a couple of years since last time, so I really hoped that the artist lineup and music schedule would be more comprehensible than it was last time we went. It wasn’t. To make sense of all this data, it needs personalization. Humans tend to be very specific when it comes to their taste in music, so just having a list of a couple of thousand artist names doesn’t cut it.

Since there were no other services available, I figured that even a poor one could help people like us. I started by scraping the SXSW website (5 lines of sloppy PHP):
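That PHP snippet isn't reproduced here, but the idea in Python would be something like this (the markup pattern is hypothetical; adjust it to whatever the schedule page actually looks like):

```python
import re

def scrape_artists(html):
    """Pull artist names out of listing markup. The class name below is
    invented for illustration; inspect the real page and adjust the pattern."""
    return re.findall(r'<a class="artist-link"[^>]*>([^<]+)</a>', html)
```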

Once we have the data from the SXSW website, it needs to be mashed with some Spotify metadata:
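A sketch of that step against the Spotify Web API (the search and related-artists endpoints are real; the MySQL caching, and the OAuth token you'd need, are left out):

```python
import json
import urllib.parse
import urllib.request

API = "https://api.spotify.com/v1"

def search_artist_url(name):
    """Endpoint for matching a scraped name to a Spotify artist id."""
    return f"{API}/search?" + urllib.parse.urlencode(
        {"q": name, "type": "artist", "limit": 1})

def related_artists_url(artist_id):
    """Endpoint for expanding one artist into their related artists."""
    return f"{API}/artists/{artist_id}/related-artists"

def get_json(url, token):
    """Authenticated GET returning parsed JSON."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```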

Haha I know! This is very poor code. Using MySQL as a cache… It’s amazing how such a good result can be made with such bad practices. I would (probably) never do this for a client, but for a sloppy side project like this – sure.

Anyway. Now that we’ve got all the artists playing at the festival, and all the related artists, all we need is for people to sign in using Spotify’s OAuth, fetch their top artists, and do some ID matching in our MySQL database to see which artists to recommend.

An added bonus was automatically creating a playlist in the user’s Spotify account. This required two more API calls: first create an empty playlist, then add all the track URIs.
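Roughly like this (the endpoints are the real Spotify Web API ones; error handling is left out, and note that the API caps each add-tracks call at 100 URIs, hence the batching):

```python
import json
import urllib.request

def post_json(url, payload, token):
    """Authenticated POST with a JSON body, returning parsed JSON."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def chunks(seq, size=100):
    """Spotify accepts at most 100 track URIs per add-tracks request."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def create_playlist_with_tracks(user_id, name, track_uris, token):
    """Two calls: create an empty playlist, then add the track URIs in batches."""
    playlist = post_json(f"https://api.spotify.com/v1/users/{user_id}/playlists",
                         {"name": name}, token)
    for batch in chunks(track_uris):
        post_json(f"https://api.spotify.com/v1/playlists/{playlist['id']}/tracks",
                  {"uris": batch}, token)
    return playlist["id"]
```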

Some gotchas in the Spotify API we’ve learned (this might be bad advice, absolutely no idea…):
a) SXSW artists are often “up and coming” and have fewer than 1,000 listens on Spotify. This makes Spotify unsure…
b) Rate limiting is real. When making lots of requests, be sure to sneak in a sleep(1) here and there. Seems to help.
c) Don’t sign requests you don’t need to sign. Since rate limits are counted towards your app ID, this is a neat way of sneaking in some extra requests. Stuff like searching for related artists doesn’t currently require authentication.
d) Cache as much as you can before releasing your app. Fetching top tracks for 50 artists synchronously makes you hit the rate limit, and it will also hit your server’s network I/O hard if you have a lot of simultaneous users. In this case we had a fixed set of artists, so it made sense to prefetch all their top tracks in a local DB.

Sidenote: We deliberately chose not to name this service after SXSW to avoid trademark infringement. It’s called austindiscovery.earthpeople.se.

/Peder

Our new tool finds “hidden” WordPress pages exposed by the just-released WP REST API

In December, WordPress 4.7 was released. The coolest part of this release was the inclusion of the WordPress REST API. After quite some time in development, it was finally included in core.

The WordPress REST API is great for developers because it makes it very easy to get all pages, posts and users from a WordPress site and use them in any way they want, using JavaScript or PHP or basically any programming language.

Did we say all pages? Yup, that’s right. All (well, most) of your posts, pages and users are exposed to the public with this API. That includes pages that have no public links to them and pages that are not available in any menus on your website.
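Listing that content is just an unauthenticated GET against the standard wp/v2 routes that ship with 4.7 (the site URL below is hypothetical):

```python
import json
import urllib.request

def wp_content(site, kind="pages"):
    """Fetch what a WordPress 4.7+ site exposes; kind is 'pages', 'posts' or 'users'."""
    url = f"{site.rstrip('/')}/wp-json/wp/v2/{kind}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def titles(items):
    """Human-readable labels: titles for posts/pages, display names for users."""
    return [i["title"]["rendered"] if "title" in i else i.get("name") for i in items]

# titles(wp_content("https://example.com"))  # includes pages with no public links
```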

So some of the WP devs here at Earth People got curious about the API and what exposed stuff we could find on websites out there that had updated to 4.7. We figured that an easy way to test this was to create a Google Chrome extension.

Hello there WP Content Discovery Chrome Extension

So we made the extension, and we called it WP Content Discovery.

Here’s how it works:
It adds an icon to your Chrome toolbar. By default it only displays the letter “w”, as in WordPress. When you visit a WordPress-powered website and it detects the API, it lights up and displays “API” in blue.

The extension icon in action. On the first site no API is detected. On the second site the API is detected and the icon shows a blue API text.
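The detection itself can be done by looking for the API link that WordPress 4.7+ prints in the page head. The extension is JavaScript, but the idea in Python looks like this (the regex assumes WordPress's usual rel-before-href attribute order):

```python
import re

def find_rest_api(html):
    """Return the REST API root if the page advertises rel="https://api.w.org/"."""
    m = re.search(r'rel=["\']https://api\.w\.org/["\']\s+href=["\']([^"\']+)', html)
    return m.group(1) if m else None
```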

Now the fun starts: click the icon to get a list of pages, posts and users on that website!

Here is an example from the website of admin activity logger Simple History:

Here we can see that the extension indeed did find some pages on the website we tried it on…

Please try the extension. And please let us know what you think here in the comments!

One last thing… the API may freak some people out…

Even if all the data that you can get publicly from the REST API is already available somewhere in WordPress, it does freak some people out that it actually is possible to get the content so easily.

It is, however, pretty easy to disable the API if you find it too scary.


Giphy reactions via SMS on an old CRT

Audience participation during a conference is tricky. You want it to be relevant, so it’ll have to be some kind of tech that responds quickly. And at the same time it should be moderated, since the audience can write offensive stuff. Tricky stuff.

For our track at Internetdagarna I’ve built a little something that hopefully does this, and also shows what Creative Technology can be about.


  1. Get an old CRT TV. Because it looks cool.
  2. Connect a microcomputer (like the C.H.I.P.) via an old VCR (you need the VCR to do RF modulation from the microcomputer’s composite output)
  3. Register a phone number on Twilio and forward incoming text messages to a database
  4. Write a few lines of crappy jQuery to poll this database and fetch a Giphy gif based on the text
  5. Run this crappy jQuery in a web browser on the microcomputer

This way the audience can text reactions to the TV, and it’ll feel like a nice clash of new and old at the same time.
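The browser side was crappy jQuery, but the polling logic boils down to something like this (sketched in Python; Giphy's translate endpoint is real, while the database lookup is stubbed as a callable and the key is a placeholder):

```python
import urllib.parse

GIPHY_TRANSLATE = "https://api.giphy.com/v1/gifs/translate"

def giphy_url(text, api_key):
    """Giphy's translate endpoint maps a phrase to one matching gif."""
    return GIPHY_TRANSLATE + "?" + urllib.parse.urlencode(
        {"api_key": api_key, "s": text})

def poll(fetch_latest_sms, shown, api_key):
    """One polling tick: if a new text has arrived, resolve it to a gif URL.
    fetch_latest_sms would query the database that Twilio forwards texts into."""
    latest = fetch_latest_sms()
    if latest and latest != shown:
        return latest, giphy_url(latest, api_key)
    return shown, None
```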

Post mortem. All this crappy tech + my crappy code made this setup crash every 15 minutes. Still great.