OK, this can be done with PhantomJS, and that is probably better/simpler/etc. But if you, for any reason, can’t use PhantomJS to make screengrabs, here’s an overcomplicated alternative we came up with.
wkhtmltopdf is an easy way of making screengrabs of a webpage:
$ wkhtmltopdf "http://google.com" googlescreengrab.pdf
Most package managers (at least RPM- and APT-based ones) have it, and it just works. Some arguments may or may not work depending on the version; this example is based on 0.9.9, which APT offers right now.
While this works fine on your local machine, it won’t on a headless server, so we install Xvfb, which acts as a virtual X server:
$ xvfb-run --server-args="-screen 0 1024x768x24" wkhtmltopdf "http://google.com" googlescreengrab.pdf
This will give you a PDF, probably with the webpage not filling the entire page. To get a usable image file, you need to run it through some ImageMagick:
$ convert googlescreengrab.pdf -trim -gravity southeast -background none -splice 50x150 googlescreengrab.png
The arguments need to be tweaked for your setup, so you get the entire screengrab and nothing else.
In total, this is what the command looks like:
$ xvfb-run --server-args="-screen 0 1024x768x24" wkhtmltopdf -s A1 -B 0 -L 0 -R 0 -T 0 --redirect-delay 3000 "http://google.com" googlescreengrab.pdf && convert googlescreengrab.pdf -trim -gravity southeast -background none -splice 50x150 googlescreengrab.png
So, in short: run PhantomJS if you can.
If not, this will make your client happy.
And make you feel dirty.
At Earth People, we recently fell in love with Slack (oh yes, that’s an affiliate link that will give us both 100 dollars in credit). It got us off Skype (which we didn’t really love anyway) and on to something that felt fresh. What really got us hooked was all the integrations we could do. Here are a couple we’ve done so far:
1. When the doorbell rings, an Arduino triggers an HTTP request to some shitty PHP script we’ve written, which checks Google Calendar for meetings and pings the chatroom with what’s at the door.
2. Lunch is obviously a big thing for us hoomans. We’ve made a little curl-based bot that checks what the nearby restaurants serve for lunch when you type /zum or /stadsmissionen.
3. Whenever someone deploys to a production server, Slack will let everyone know.
4. /reddit <any word> will try to fetch an image from a corresponding subreddit. And post it to everyone… yeah, we’ll see how long this one gets to live. Depends on when someone starts fetching NSFW stuff I guess…
Right now we’re working on a crappy little Raspberry Pi integration for our Moccamaster. Wouldn’t it just be great to be able to ask Slack if there’s any coffee…
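Most of these integrations boil down to the same move: a script fires an HTTP POST at a Slack incoming webhook. A minimal sketch in PHP — the webhook URL and the message are made up, not our real ones:

```php
<?php
// Minimal doorbell-style Slack ping, assuming an incoming webhook
// has been set up for the chatroom (URL below is a placeholder).

// Build the JSON payload Slack's incoming webhooks expect.
function build_payload($text) {
    return json_encode(array('text' => $text));
}

// POST it to the webhook with curl.
function ping_slack($webhook_url, $text) {
    $ch = curl_init($webhook_url);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, build_payload($text));
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}

// The Arduino just hits a URL; the script does the rest. E.g.:
// ping_slack('https://hooks.slack.com/services/XXX/YYY/ZZZ',
//            'Ding dong: 14:00 meeting is at the door');
```

The Arduino only needs to know one URL; swapping Slack for something else later means changing one function.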
A client asked us what could be done, technically, with Snapchat. You may have heard about Snapchat this fall. It apparently has millions of users and is refusing to sell out to Facebook for billions of dollars.
Many of us at Earth People have been using Snapchat for a few months, and like it a lot. It’s fun! Try it!
Technically then, what can be done? Not a lot, it turns out. There’s no API, and the Terms and Conditions clearly prohibit use of the service from outside their apps.
We couldn’t care less about the legal stuff. Their internal API has already been reverse engineered, and making a quick proof of concept was easy.
From now on you can follow our Snapchat user earthpeople-git to see a “Story” of our latest Git commit messages. Would I advise a client to rely on this technology? No. Is it fun? Yeah, kinda.
Red Bull asked us to come up with something mobile and social for their Red Bull Weekender event in Stockholm. We love music, and planned on going to this event anyway, so figuring out the functionality was very fun and actually pretty easy.
For non-FB-connected users we pretty much show a lineup in which the user can star certain gigs. In addition, we use GPS to show the nearest gig.
FB-connected users get a deeper experience, in which the user’s friends’ stars pop up under each event.
We also wanted to give the user some pointers on which gigs to go to. A lot of the artists are fairly new and unknown to the general public, so getting some pointers couldn’t hurt, right? We decided to fetch all the user’s FB likes from the dawn of time and cross-reference these for matches with similar artists in the lineup. It won’t be 100% correct for most users, but it’s a nudge in the right direction.
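A rough sketch of that cross-referencing. The real thing pulls the likes from the Facebook Graph API and the similar-artist data from a music API; here both are hardcoded arrays, and all names are made up:

```php
<?php
// Match a user's Facebook likes against the lineup, including
// similar artists, and return the gigs worth recommending.

function recommend_gigs(array $likes, array $lineup) {
    $recommended = array();
    // Normalize case so 'Robyn' and 'robyn' compare equal.
    $likes = array_map('strtolower', $likes);
    foreach ($lineup as $gig) {
        // A gig matches if the user likes the artist itself,
        // or any artist considered similar to it.
        $candidates = array_merge(array($gig['artist']), $gig['similar']);
        foreach ($candidates as $name) {
            if (in_array(strtolower($name), $likes)) {
                $recommended[] = $gig['artist'];
                break;
            }
        }
    }
    return $recommended;
}
```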
Every time someone visits the site, we save their coordinates. This data is then used to make a heat map on top of Google Maps, so everyone can see where the most action is at the moment.
When creating a REST API for a client, we ran across this really weird problem.
IE8/IE9 can’t make XMLHttpRequests to other domains via CORS. Microsoft invented their own solution for this, called XDR (XDomainRequest). The problem with XDR is that it doesn’t send a proper Content-Type on POST requests; the value always defaults to "text/plain" instead.
PHP only populates the $_POST array when the Content-Type is set to "application/x-www-form-urlencoded" or "multipart/form-data". So $_POST is just an empty array unless you add some kind of workaround.
Here’s a stripped-down version of how we solved it.
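The gist itself isn’t embedded in this copy of the post, but the usual shape of the workaround is to parse the raw request body yourself when $_POST comes up empty. A sketch, not our exact code:

```php
<?php
// When IE's XDR posts with Content-Type text/plain, PHP leaves
// $_POST empty even though the body is urlencoded form data.
// Fall back to parsing the raw request body ourselves.

if (empty($_POST)) {
    // php://input gives the raw request body.
    $raw = file_get_contents('php://input');
    if ($raw !== false && $raw !== '') {
        // parse_str() decodes "foo=bar&baz=2" into an array.
        parse_str($raw, $_POST);
    }
}

// From here on, $_POST works as usual, whatever the browser did.
```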
From time to time, we’re asked to fix broken sites built by other agencies. This can be extremely tricky, but if it’s technologies we know and love (PHP, MySQL, Apache, Memcached, WordPress, CodeIgniter, Laravel, Slim, etc) we usually say yes.
If a site keeps falling over every n hours/days, I start by checking if there are any cron jobs around. In this example we’ll pretend we have a cron job running a PHP script every minute, creating an index of all articles in the database. The first few months this wasn’t a problem, as there wasn’t that much content to index. Running the job took 10-15 seconds.
Six months later, there are many more articles in the database, and the index takes longer to build. All of a sudden it takes more than one minute to complete, and now things get hairy. After one minute we have two jobs running, and after a few hours we have hundreds of cron jobs running. Eventually the server won’t have any more memory to go around, and it’ll crash.
The solution is simple, and has been around in the UNIX world since forever: implement a lock file.
I’ve left out one little bit in the gist above: if the server goes down for a reboot while the script is running, there will be a leftover lock file preventing new cron jobs from running. How you handle this is up to you; for me it differs from case to case. Instead of just creating an empty lock file, you could write the process ID (PID) to the lock file and use it to check whether the script really is still running.
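A minimal sketch of such a lock file, with the PID variant included. The path and job are made up, and posix_kill() requires PHP’s POSIX extension:

```php
<?php
// Lock file for the article-indexing cron job: refuse to start if a
// previous run is still going, but recover from stale locks.

$lockfile = '/tmp/build-article-index.lock';

if (file_exists($lockfile)) {
    $pid = (int) file_get_contents($lockfile);
    // Signal 0 doesn't kill anything; it just checks if the PID is alive.
    if ($pid > 0 && posix_kill($pid, 0)) {
        exit("Previous run (pid $pid) still going, bailing out.\n");
    }
    // Stale lock, e.g. left behind by a reboot mid-run: clean it up.
    unlink($lockfile);
}

file_put_contents($lockfile, getmypid());

// ... build the article index here ...

unlink($lockfile);
```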
Scaling servers is hard work, and none of us comes from a devops background. Our approach to scaling starts well before that, with every decision a developer takes when structuring the app.
For a campaign site we made recently, we expected huge amounts of traffic. The site itself was one of those one-page things, but had a lot of dynamic elements: Instagram pictures with a certain hashtag from a specific area, tweets from a predefined list of users, tracking data for 10 different objects rendered on a map, and a countdown, which needed the server time in seconds to be accurate.
Budget was tight and we wanted the server environment to be dead simple.
Amazon S3 to the rescue. We set up an S3 bucket for static site hosting, pointed a domain at it and were on our way. The dynamic content then? And the server timestamp for the countdown?
For the timestamp, we could just make a simple AJAX request to the admin server, right? Getting the server date is a very cheap request, after all? Nah. We reused the HTTP headers from the AJAX request that polls the dynamic content: the Date header S3 serves with every response is accurate and predictable.
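The trick is just date parsing. On the real site the countdown ran in the browser, but the math is the same anywhere; here it is sketched in PHP, with a made-up Date header and launch date:

```php
<?php
// Compute seconds remaining from the Date header of any S3 response.
// Both values below are invented for illustration.

$date_header = 'Tue, 04 Mar 2014 10:00:00 GMT'; // from the S3 response
$launch      = strtotime('2014-03-08 00:00:00 UTC');

// RFC 1123 dates parse straight into a unix timestamp.
$now = strtotime($date_header);

$seconds_left = $launch - $now; // drive the countdown from this
```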
This setup means that as long as Amazon S3 can take the load, the campaign is fine. If the admin box goes down for some reason, the site will still work but with old content.
Unfortunately we can’t tell you which site it is, as we are prevented by an NDA.
Ask us more if you’re interested.
We recently launched andtherev.com, a new kind of bike shop/factory on Södermalm in Stockholm. The client wanted website users to be able to participate on the site, so instead of using a CMS to fill the grid with content we used Twitter and Instagram.
Each page on the site contains a mix from Instagram and Twitter. The hashtags used are fairly unique, and don’t (yet) contain any junk. The plan is to use these social media channels to maintain the site, and to engage customers to take part.
Well, OK, there is a CMS involved. We automatically create WordPress posts by making scheduled requests to Instagram and Twitter. This speeds up the experience for the user and lets us hide offensive posts.
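The import step might look roughly like this. The field names are made up, and wp_insert_post() is WordPress’s own function, so the loop at the bottom only runs inside WordPress:

```php
<?php
// Sketch of the scheduled import. In the real setup this runs on a
// schedule inside WordPress and the items come from the
// Instagram/Twitter APIs; here an item is just an array.

// Map a fetched item onto the array wp_insert_post() expects.
function item_to_post(array $item) {
    return array(
        'post_title'   => $item['caption'],
        'post_content' => '<img src="' . $item['image_url'] . '">',
        'post_status'  => 'draft', // drafts first, so junk can be hidden
    );
}

// Inside WordPress you would then loop over the fetched items:
// foreach ($items as $item) {
//     wp_insert_post(item_to_post($item));
// }
```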
In the future we plan on creating a page which only picks up stuff from the area around the store.
Sweden Social Web Camp is currently taking place on the island of Tjärö, Sweden. The entire social media elite (except for me, höh) is there. To keep up with what they’re doing, I hereby showcase our WordPress plugin EP HashImage.
Below we’re displaying images from the hashtag SSWC, by scraping Twitpic, Instagram, yfrog, Plixi, Flickr and pic.twitter.com.