Stupid matkasse

The trend of ready-made meal kits is a threat to our curiosity and creativity in the kitchen. Everyone cooks the same food. Soon the evolution of Swedish home cooking will have stagnated completely.

That is exactly what I was not thinking when I decided to build my (prize-winning!) hack at this year’s Stupid Hackathon. I probably just thought it might be a bit of fun. In reality it turned out to be hysterically funny. Whether that was due to fatigue, or because some things get funnier when a lot of people laugh at the same time, I don’t know.

You’ll find the recipes here: https://stupid-matkasse.firebaseapp.com/

/Hjalle

Kryptosemla – eat a semla in the blockchain

The value of the semla (the traditional Swedish cream bun) has gone up over time.

After the semla wrap and the nacho semla comes the Kryptosemla! You can finally eat a semla in the blockchain. You mine a kryptosemla by whisking back and forth on the screen, so the lactic acid building up in your arm becomes your proof of work.

You can eat a kryptosemla, or give it away to someone else. Behind the scenes sits a blockchain where every transaction is hashed. Mining earns you a reward of 1 semla, but on Fat Tuesday (fettisdagen) the reward was 3 kryptosemlor.

When Bitcoin is mined, you compute a hash of the previous content, and a valid hash is one that starts with a certain number of zeros, e.g.

00000000000000000019296c5484b73953932977e976815dd6bc6cbd24d2d686

A kryptosemla requires a hash that starts with ce (as in semla) – e.g.

ceb93efb8b5a7dc7fe1b7ec663e37644f587200b22bacbc19ce02040fdaf00f7

So it takes considerably less processing power to find a valid hash for a kryptosemla than for Bitcoin, but more whisking power. Each whisk stroke computes a candidate hash, and once you’ve whisked up a valid one, a call is made to the server, which checks whether you beat the other miners to it, whereupon you get the reward…!
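The mining loop boils down to very little code. Here is a minimal Python sketch of the idea – not the actual kryptosemla code, and whether the real thing uses SHA-256 or this nonce scheme is an assumption:

```python
import hashlib

def mine_semla(previous_hash: str, prefix: str = "ce") -> tuple[int, str]:
    """Try nonces until the hash starts with the required prefix.
    In the real hack every whisk stroke advances the search; a loop stands
    in here. SHA-256 and the nonce scheme are assumptions, not the actual
    implementation."""
    nonce = 0
    while True:
        candidate = hashlib.sha256(f"{previous_hash}:{nonce}".encode()).hexdigest()
        if candidate.startswith(prefix):
            return nonce, candidate
        nonce += 1

# A two-hex-character prefix matches 1 hash in 256 on average,
# so a valid kryptosemla hash shows up after a few hundred whisks.
print(mine_semla("ceb93efb8b5a7dc7fe1b7ec663e37644f587200b22bacbc19ce02040fdaf00f7"))
```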

Since the value of the kryptosemla was still uncertain in the beginning, miners appeared who tried to game the system to get semlor, e.g. by using computing power instead of cream whisking. There were also serious DDoS attacks aimed at damaging the entire semla economy. Instead of eating semlor, the perpetrators gave hundreds of thousands of semlor to themselves. Semlor circulated in ways the cream-based blockchain technology wasn’t ready for, and the site had to shut down on Ash Wednesday.

This is a project from Stupid Hackathon, February 2018. A kryptosemla is currently worth about $11.

/Fredrik

vabruari.sucks

Every parent of small children knows how vile February is. The preschools overflow with germs, and the number of complete workdays can be counted on one hand. On the preschools’ front doors there are always notes about which illnesses are going around at the moment.

For this year’s Stupid Hackathon we chose to interpret that feeling by building a preschool-note generator – www.vabruari.sucks. (The name is a pun mashing “VAB”, the Swedish system for staying home to care for a sick child, into “februari”.) It pulls in diseases from various sources – Vårdguiden, Wikipedia and other sites – and generates a photorealistic picture from one of Stockholm’s preschools. The preschool names are fetched from Stockholm.se.

And yes – sometimes it crosses the line. Because the data is fetched dynamically, it’s hard to blacklist outright inappropriate diseases.

/Peder & Andreas

Stupid Hackathon – again

Last February ~70 people got together and built some very weird stuff: a chatroulette clone for castanets, captchas to keep out cats, big data butt probes, Tinder for woodpeckers and gah, so much more.

Personally, my fondest memory was the inclusive and jolly feeling. It truly felt like these 70 attendees were the great minds of our generation – who finally didn’t need to use their cognitive superpowers for work-stuff.

On February 10th 2018 we’re doing it all over again. Please attend.

/Peder and the rest of the bunch at Earth People

Motion tracking for DOOH

Earth People was approached by Forsman & Bodenfors to make a Digital Out Of Home (DOOH) screen come to life. The objective of the campaign was to showcase the client’s eye makeup products. The eyes were shot in 4k and our task was to make these eyes “look” at any motion in front of the screen.

Kinect to the rescue! Or not?

We quickly agreed that the Kinect would be the most suitable choice for tracking, since Kinect 2 provides APIs for body tracking. After much trial and error, we realized that the Kinect had too many cons versus pros in our case.

First of all: the hardware. In order to get any decent performance out of the Kinect, we needed a computer powerful enough to process the incoming data. This meant that our hardware footprint would be huge – much too big for the tiny space available inside the outdoor screen.

Secondly: speed. The Kinect is meant to be enjoyed at home with a small group of people. There is time to move around the room and place yourself in a position where the Kinect recognizes you and starts calculating/tracking your body movement. We don’t have that kind of time when people are just passing by.

I’ve done several projects using tracking in the past, and I knew it would be possible to get good enough results using a plain old webcam. We don’t need 4K video or pinpoint accuracy in this case. We just need to know where people are, if there are any at all, in front of the screen.

Processing image data

Our solution for tracking comes in ~4 steps which are, by themselves, quite straightforward.

1. Get the raw webcam feed

This is walk-in-the-park easy. In our case we use Processing and retrieve the webcam data using the standard Capture library.

2. What has changed?

By storing the previous frame from the webcam, we can calculate the difference between the pixels of the current frame and the previous one. This gives us a rough idea of what is going on in front of the camera. Doing this with bitwise operations keeps it cheap, leaving more processing power for the next computation.
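The original is Processing code; here is a rough numpy sketch of the same idea, assuming 8-bit grayscale frames. Which bitwise operation the real code uses isn’t shown – XOR is one cheap option:

```python
import numpy as np

def frame_diff(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
    """Per-pixel change between two 8-bit grayscale frames.
    XOR flags changed pixels with a single bitwise pass; the exact
    operation in the original Processing sketch is an assumption."""
    return np.bitwise_xor(current, previous)
```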

3. Normalize the result

The data we get from the difference calculation is very noisy and blurred. This is due to the motion. In order to proceed we need to sanitize the output, removing “dust” and tiny pixel changes – the small changes are probably not a person walking by anyway. The “clean” data from our normalization (or thresholding) step is a great start, but it still registers a high level of motion all over the place. In this example, my t-shirt is moving a lot, but we don’t necessarily want to track that.
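A sketch of that thresholding step (the cutoff value is an assumption, tuned per camera in practice):

```python
import numpy as np

def normalize(diff: np.ndarray, cutoff: int = 32) -> np.ndarray:
    """Turn the noisy difference image into a binary motion mask.
    Pixels below the cutoff are treated as dust/noise and dropped."""
    return (diff > cutoff).astype(np.uint8)  # 1 = motion, 0 = still
```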

4. Predictions

We know that high concentrations of motion should be clustered together. My hand moving produces a much greater change than my t-shirt. By looking at each individual white pixel in our normalized output, we can connect that pixel with the white pixels immediately surrounding it. Doing this recursively, and registering a new cluster whenever we no longer find any white pixels adjacent to the current one, we get a long list of isolated clusters we call “islands”.

The amount of data to process is still enormous. We need to boil this down, quickly. Here’s what we did: each cluster of fewer than roughly 200 pixels is discarded immediately. In this example, that reduced the number of islands from well above 150 down to about 5. That is still too much data for us to estimate where the motion is coming from. A last reduction pass is brute-forced: we merge any island whose bounding box (highlighted in green) covers another island’s.

The largest island contains the most motion. This is what we are interested in.
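A Python sketch of the island pass, using a BFS queue instead of literal recursion so big blobs don’t blow the stack (the bounding-box merge is left out):

```python
from collections import deque
import numpy as np

def find_islands(mask: np.ndarray, min_size: int = 200) -> list:
    """Flood-fill the binary motion mask into clusters ("islands"),
    discarding any cluster smaller than min_size pixels."""
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    islands = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                queue, pixels = deque([(y, x)]), []
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    # visit the 4-connected neighbours
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_size:
                    islands.append(pixels)
    return islands
```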

5. What should the eyes look at?

This is not really tracking-related, but in order to get a single point of interest for our 15 eyes to look at, we need to be more specific than “this island looks good enough”. Each time we receive a new island to focus on, we place a vector at the center of that island. This is our target. Our current focus vector then moves closer to that target each frame, using a decay value so that we never quite reach the final target position.
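In sketch form (the class shape and the decay value are assumptions, not the production code):

```python
class Focus:
    """The single point the 15 eyes look at. Each frame it eases toward
    the latest island center instead of jumping there, so the gaze stays
    smooth and never quite snaps to the target."""
    def __init__(self, decay: float = 0.1):
        self.x, self.y = 0.0, 0.0
        self.decay = decay  # fraction of the remaining distance covered per frame

    def update(self, target_x: float, target_y: float) -> None:
        self.x += (target_x - self.x) * self.decay
        self.y += (target_y - self.y) * self.decay
```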

What more can we do with this data?

Since we know how many pixels changed between each frame, we can set a threshold for how many pixels need to contain motion before we proceed. If we never reach that threshold, we may consider the current image empty. No motion = no people = no need to process the image further.
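In sketch form (the threshold value is an assumption):

```python
import numpy as np

MOTION_THRESHOLD = 500  # assumed value; tune for camera, distance and scene

def worth_processing(mask: np.ndarray) -> bool:
    """Bail out early on still frames: no motion = no people = no work."""
    return int(mask.sum()) >= MOTION_THRESHOLD
```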

There are many more things going on behind the scenes, but this should give a glimpse of what we ended up with and why.

SXSW Music Discovery

This year we decided to go to SXSW. It’s been a couple of years since the last time, so I really hoped the artist lineup and music schedule would be more comprehensible than they were back then. They weren’t. To make sense of all this data, it needs personalization. Humans tend to be very specific when it comes to their taste in music, so just having a list of a couple of thousand artist names doesn’t cut it.

Since there were no other services available, I figured that even a poor one could help people like ourselves. I started by scraping the SXSW website (5 lines of sloppy php):
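The original five lines of sloppy PHP aren’t shown here; a roughly equivalent Python sketch might look like this (the URL and the markup pattern are assumptions, not the actual page):

```python
import re
import urllib.request

# Hypothetical: the schedule URL and the CSS class are made up for
# illustration; the real scraper targeted whatever markup SXSW used.
html = urllib.request.urlopen("https://schedule.sxsw.com/2017/artists").read().decode("utf-8")
artists = re.findall(r'class="artist-name"[^>]*>([^<]+)<', html)
print(f"Scraped {len(artists)} artist names")
```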

Once we’ve got the data from the SXSW website, it needs to be mashed up with some Spotify metadata:
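That snippet isn’t shown either, so here is a hedged Python sketch of the mash-up step. The search and related-artists endpoints exist in the Spotify Web API; the helper names are made up, and the MySQL caching laughed about below is left out:

```python
import requests

def spotify_lookup(name: str, token: str) -> dict | None:
    """Naive best match: take the first hit from Spotify's artist search."""
    r = requests.get(
        "https://api.spotify.com/v1/search",
        params={"q": name, "type": "artist", "limit": 1},
        headers={"Authorization": f"Bearer {token}"},
    )
    items = r.json().get("artists", {}).get("items", [])
    return items[0] if items else None

def related_artists(artist_id: str, token: str) -> list:
    """Fetch Spotify's related artists for a given artist id."""
    r = requests.get(
        f"https://api.spotify.com/v1/artists/{artist_id}/related-artists",
        headers={"Authorization": f"Bearer {token}"},
    )
    return r.json().get("artists", [])
```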

Haha, I know! This is very poor code. Using MySQL as a cache… It’s amazing how good a result you can get with such bad practices. I would (probably) never do this for a client, but for a sloppy side project like this – sure.

Anyway. Now that we’ve got all the artists playing at the festival, and all the related artists, all we need is for people to sign in using Spotify’s OAuth, fetch their top artists and do some ID matching in our MySQL database to see which artists to recommend.

An added bonus was automatically creating a playlist in the user’s Spotify account. This required two more API calls: first create an empty playlist, then add all the track URIs.
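Both endpoints exist in the Spotify Web API; the wrapper below is just a hedged sketch of those two calls:

```python
import requests

def create_playlist_with_tracks(user_id: str, token: str, track_uris: list,
                                name: str = "SXSW discovery") -> str:
    """Call 1: create an empty playlist. Call 2 (repeated): add the track
    URIs, which the API caps at 100 per request."""
    headers = {"Authorization": f"Bearer {token}"}
    r = requests.post(
        f"https://api.spotify.com/v1/users/{user_id}/playlists",
        headers=headers, json={"name": name, "public": False},
    )
    playlist_id = r.json()["id"]
    for i in range(0, len(track_uris), 100):
        requests.post(
            f"https://api.spotify.com/v1/playlists/{playlist_id}/tracks",
            headers=headers, json={"uris": track_uris[i:i + 100]},
        )
    return playlist_id
```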

Some gotchas in the Spotify API we’ve learned (this might be bad advice, absolutely no idea…):
a) SXSW artists are often “up and coming” and have fewer than 1000 listens on Spotify. This makes Spotify unsure…
b) Rate limiting is real. When making lots of requests, be sure to sneak in a sleep(1) here and there. It seems to help.
c) Don’t sign requests you don’t need to sign. Since rate limits are counted towards your app ID, this is a neat way of sneaking in some extra requests. Stuff like searching for related artists doesn’t currently require authentication.
d) Cache as much as you can before releasing your app. Fetching top tracks for 50 artists synchronously makes you hit the rate limit, and it will also hit your server’s network IO hard if you have a lot of simultaneous users. In this case we had a fixed set of artists, so it made sense to prefetch all their top tracks into a local db.

Sidenote: We deliberately chose to not name this service after SXSW to avoid trademark infringement. It’s called austindiscovery.earthpeople.se.

/Peder

Giphy reactions via SMS on an old CRT

Audience participation during a conference is tricky. You want it to be relevant, so it’ll have to be some kind of tech that responds quickly. At the same time it should be moderated, since the audience can write offensive stuff. A tricky combination.

For our track at Internetdagarna I’ve built a little something that hopefully does this, and also shows what Creative Technology can be about.

 

  1. Get an old CRT TV. Because it looks cool.
  2. Connect a microcomputer (like the C.H.I.P.) via an old VCR (you need the VCR to RF-modulate the microcomputer’s composite output)
  3. Register a phone number on Twilio and forward incoming text messages to a database
  4. Write a few lines of crappy jQuery to poll this database and fetch a Giphy gif based on the text (see the server-side sketch after this list)
  5. Run this crappy jQuery in a web browser on the microcomputer
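Not the actual code, but a rough Python/Flask sketch of the server side of steps 3 and 4. The endpoint names and the sqlite storage are made up, and the crappy jQuery on the TV isn’t shown:

```python
import sqlite3
from flask import Flask, request

app = Flask(__name__)

@app.route("/sms", methods=["POST"])
def incoming_sms():
    """Twilio forwards each text message here, with the content in the Body form field."""
    with sqlite3.connect("reactions.db") as db:
        db.execute("CREATE TABLE IF NOT EXISTS reactions (text TEXT)")
        db.execute("INSERT INTO reactions (text) VALUES (?)", (request.form.get("Body", ""),))
    return "<Response></Response>", 200, {"Content-Type": "text/xml"}  # empty TwiML reply

@app.route("/latest")
def latest():
    """The browser on the TV polls this, then asks Giphy (e.g. the
    translate endpoint) for a gif matching the text."""
    with sqlite3.connect("reactions.db") as db:
        row = db.execute("SELECT text FROM reactions ORDER BY rowid DESC LIMIT 1").fetchone()
    return {"text": row[0] if row else ""}
```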

This way the audience can text reactions to the TV, and it’ll feel like a nice clash of new and old at the same time.

Post mortem: all this crappy tech + my crappy code made this setup crash every 15 minutes. Still great.

Creative Technology @ Internetdagarna

Just after my blog post about how my family uses Slack, the nice people at IIS asked if I wanted to host a track at Internetdagarna in November. Since that blog post came about while misusing technology, I figured the track should be about just that. The name of the track: Creative Technology (because that’s the fancy name for goofing around with web stuff).


Friends and colleagues from the industry will participate, and what we really want is to get people making new stuff with technology. We’ll showcase the limitations and possibilities of new platforms and ecosystems, and hopefully inspire you to combine them into products and services only you can imagine. We’ll have people talking about prototyping hardware, explaining the basics of machine learning, live coding a bot and more. Not a lot of detail, more examples and inspiration.

Meet the speakers:
Sonja Petrovic Lundberg
Jakob Öhman
Sanna Frese
Carl Calderon
Magnus Östergren
David Eriksson
Adam Agnaou
Farvash Razavi
Darius Kazemi
Fredrik Mjelle
Maria Starck
Christian Heilmann
Fredrik Heghammar
Peder Fjällström

After this day you will probably have more than a few stupid things you want to learn and/or build. You don’t need to be a developer to attend, but being childish does help.

Get your ticket here:
https://internetdagarna.se/program/creative-technology/

Use the code 2016IND to get 20% off your ticket.

/Peder

PS. People who like this also like this.

Stupid Hackathon Sweden

In February 2016 an event took place in New York called “STUPID SHIT NO ONE NEEDS & TERRIBLE IDEAS HACKATHON”. The stuff from this hackathon both made me giggle and gave me hope.

The last 10 years on the web have been mostly about Salesforce integrations, paywalls, content marketing, gulp-or-grunt, webscale NoSQL and A/B testing. This is all good, and our company is built on money from stuff like this. But I remember when the Internet was something else. When everyone with a Geocities account made weird stuff no one had ever thought of. Meaningless stuff no one needs. Terrible ideas. It was an innocent and beautiful time.

Back in February, when my Twitter feed suddenly filled up with 3D cheese printers and Tinder for babies, it felt like I could breathe freely again.


In February, the first Swedish version will take place.
I hope it will at least make you giggle too.

www.stupidhackathon.se / twitter.com/stupidhack_sv

(All hacks can be found in this GoogleDoc)

/Peder

Time tracking via Slack

Time tracking in general sucks. And we don’t track time unless we’re working on projects billed by the hour. Since we live our lives in Slack, and are pretty decent developers, I figured we could make this suck slightly less.


Each channel has a command called /timedude which takes a few options (a rough sketch of the handler follows after this list):
/timedude add 1h added spacer.gif (adds 1 hour for today along with a comment)
/timedude list (lists your own activity)
/timedude listall (lists everyone’s activity in this channel)
/timedude export (responds with a url to a csv export)
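Not the real Timedude, but a minimal Python/Flask sketch of how such a handler could parse the add option. Slack posts slash commands as form data; the storage is elided:

```python
import re
from flask import Flask, request

app = Flask(__name__)

@app.route("/timedude", methods=["POST"])
def timedude():
    """Slack sends everything typed after /timedude in the 'text' form field."""
    text = request.form.get("text", "")
    user = request.form.get("user_name", "")
    match = re.match(r"add\s+(\d+(?:\.\d+)?)h\s*(.*)", text)
    if match:
        hours, comment = float(match.group(1)), match.group(2)
        # ...store (user, channel, hours, comment) in the database here...
        return {"text": f"Added {hours}h for {user}: {comment}"}
    return {"text": "Usage: /timedude add 1h comment | list | listall | export"}
```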

A new addition to Timedude is an integration with Buddy.works, a git and CI/deploy platform we just started using. An hour or so after the end of the workday, Timedude checks all git repos for commits, and if someone has committed to a repo without reporting time in the corresponding Slack channel, Timedude pings the committer on Slack.

You are free to take our code and use it as you see fit. It doesn’t come with support, but it’s only about 200 lines of code in total, so you can probably figure out how it’s supposed to work.

(The code does very much suck, but it works. Refactor coming up. Any day now…)