Tag Archives: Web

Online security basics: Clicking & downloading

Now that our child is using computers and the web more and more, I’ve been thinking a lot about protecting children on the internet. There seems to be an endless list of things you should and shouldn’t do, but I was struck by some simple advice in the latest Security Now episode (#507) that provides a lot of protection to start you off.

Regarding clicking links in emails, from 1:39:10 in the show, Steve Gibson makes the distinction between emails that you’re expecting and emails that you’re not. In other words:

Don’t click links in emails that you weren’t expecting.

For example…

  • Probably safe: You register on a website and then get a confirmation email from them.
  • Probably safe: Your dad is looking to buy a motorbike and sends you a link to one on eBay.
  • Possible evil trap: An email from PayPal asks you to verify your details. To stay safe, you should go to PayPal’s site directly without clicking the email link.

Steve then goes on to mention another security expert, Brian Krebs, with this piece of advice:

Don’t download something you didn’t go looking for.

Super-sensible advice that actually works offline as well, for example in not signing up to financial offers and deals that you weren’t previously considering. Brian also has more basic rules for online safety that I recommend.

So there you go kids, follow these two rules and you’ll save yourself — and your nervous parents — a lot of trouble:

  1. Don’t click links in emails that you weren’t expecting.
  2. Don’t download something you didn’t go looking for.

Screen readers 1, humans 0

Spoken version: MP3 | OGG

Yesterday I sent out a one-question survey on Twitter about screen readers. I’d had this fantastic idea for a website plugin that would enable blog editors to easily record an audio version of their blog posts when publishing [1]. Vision-impaired users would rejoice, website traffic would shoot up and I’d be rich and famous. To double-check, I asked the following question which was retweeted and replied to by several kind souls:

I expected replies along the lines of “stupid question, of course a human recording is better but we have no choice”. How wrong I was.

If you’re not sure how a blind or low-vision user can “read” a webpage, listen to the following short video showing a screen reader in action.

When I’ve seen friends use screen readers it’s always struck me as sounding difficult to understand and likely to get annoying after a while. Where’s the warmth or personality in that mechanical voice?! But I was enlightened as to how incredibly empowering this technology is and the replies I got were pretty unanimous — screen readers win hands down.

Benefits of screen readers

  • Ability to control playback, such as jumping forwards, backwards or elsewhere in the page
  • Ability to check spelling
  • Ability to listen word by word
  • Ability to change reading speed (humans are too slow)
  • Ability to follow links

Some of these are also possible with JavaScript, e.g. changing the playback speed of an audio file, but that would mean adding an extra UI on top of the interface the user is already familiar with. I suppose it’s similar to my beloved Kindle — I now feel the limits of books printed on paper compared to ebooks with their dictionary lookups, highlighting, linking, etc.
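
For example, speeding up a recorded audio version only takes a couple of lines of JavaScript (a minimal sketch; the element ID is made up):

// Hypothetical <audio> element containing the spoken version of a post
var audio = document.getElementById('spoken-version');

// Play back at 1.5x normal speed (pitch handling varies by browser)
audio.playbackRate = 1.5;
audio.play();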

The only non-unanimous opinion was for fiction where a human narration is sometimes preferred for continuous reading and to feel emotion.

So another day older, another day wiser and thanks to everyone who sent comments.

Footnote

  1. When editing a blog post, it should be possible to use getUserMedia to record the audio, send that to the Web Audio API and export it as a WAV file using the Recorder.js library. Then get the WAV file transcoded to OGG or MP3 via an online API such as Encoding.com (no affiliation) and uploaded to cloud storage. The resulting URL could be inserted into an audio element at the top of the blog post, a bit like this one.
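
For the curious, here’s a very rough, untested sketch of that recording step, assuming a browser that supports navigator.mediaDevices.getUserMedia and the Recorder.js exportWAV API (names and timings are illustrative):

// Ask for microphone access and feed it into the Web Audio API
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var audioContext = new AudioContext();
  var source = audioContext.createMediaStreamSource(stream);

  // Recorder.js captures raw audio from the source node
  var recorder = new Recorder(source);
  recorder.record();

  // Stop after ten seconds and export the recording as a WAV blob
  setTimeout(function () {
    recorder.stop();
    recorder.exportWAV(function (blob) {
      // The blob could now be uploaded for transcoding to OGG/MP3
      console.log('Recorded WAV size: ' + blob.size + ' bytes');
    });
  }, 10000);
});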

Responsive is not just visual: Three useful web APIs

Mobile users are on the go. They’re rushing to get a train or busy meeting people. They don’t have much time and they’re on a slow connection. Right?

The answer of course is not necessarily. So-called mobile browsing could include shopping online while tucked up in bed, sending messages while watching TV or giggling at silly photos while relaxing in a cafe. It could also include being out and about but not using a mobile device.

In reality we have no idea what the users of our website are doing or where they are, and it would be a huge invasion of privacy if we did! Our fancy responsive designs should therefore not only look good but also be flexible enough to handle a variety of situations.

In other words, responsive doesn’t just mean responding to screen size or even device capabilities. It means responding to the user’s environment (as much as possible).

But enough chatter. How can we do this in practice?

There are three handy web technologies that take us part of the way there: the Battery Status API, the Network Information API and Ambient Light Events. Support for all of them is mostly limited to Chrome and Firefox for now, but keep an eye on caniuse.com and mobilehtml5.org for the latest support info. And now, on to the APIs…


Battery Status API

Why use it

Knowing whether your user is plugged in or not and whether their battery has much juice left can influence how your site reacts. Battery-draining features such as repeated actions and animations can be reduced or disabled, for example. Or you could notify the user and offer to save the current state in case there’s a sudden shutdown.

How to use it

The spec had a rewrite recently and now uses shiny new promises. The advantage of this is that the API is asynchronous, meaning that when you call the getBattery() method the browser makes sure the BatteryManager object is ready before you try to read its attributes. Those attributes are:

  • charging (a boolean)
  • chargingTime (in seconds)
  • dischargingTime (in seconds)
  • level (a number between 0 and 1)

Each of these attributes has an event so you can listen for when they change. In practice, you could use the attributes and events like this:

// Prepare a function to display the battery status
function showStatus(battery) {
  battery.onchargingchange = function () {
    console.log('Charging: ' + battery.charging);
  };
  battery.onchargingtimechange = function () {
    console.log('Charging time remaining (secs): ' + battery.chargingTime);
  };
  battery.ondischargingtimechange = function () {
    console.log('Discharging time remaining (secs): ' + battery.dischargingTime);
  };
  battery.onlevelchange = function () {
    console.log('Battery level: ' + battery.level);
  };
}

// Check for browser support first
if (!!navigator.getBattery) { // The latest API is supported
  // Use the battery promise to asynchronously call showStatus()
  navigator.getBattery().then(function(battery) {
    showStatus(battery);
  });
} else if (!!navigator.battery) { // The old API is supported
  var battery = navigator.battery;
  showStatus(battery);
}

Guille Paz has made a nice battery status demo which includes code for the old and new versions of the spec.
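
As a rough illustration of the “why” above, here’s a minimal, hedged sketch that switches off a hypothetical battery-draining animation when the battery is low and not plugged in:

// Hypothetical flag read by an animation loop elsewhere on the page
var animationsEnabled = true;

if (navigator.getBattery) {
  navigator.getBattery().then(function (battery) {
    function checkPower() {
      // Below 20% and unplugged: disable battery-draining animations
      animationsEnabled = battery.charging || battery.level > 0.2;
    }
    battery.onlevelchange = checkPower;
    battery.onchargingchange = checkPower;
    checkPower();
  });
}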

Status

This API has the best support of the three with Opera, Chrome and Firefox having implemented it. In the case of Firefox, the implementation currently uses an old version of the spec so for the time being it’s best to allow for both versions in your code.


Network Information API

Why use it

You’re probably aware of the navigator.onLine HTML5 attribute and its wide browser support but that only tells us if the user is connected to a network or not. For more detailed information about the network we need the aptly-named Network Information API. With the data it provides you could opt to show smaller or lower-quality images to users on slow connections, or only show a video background when you know the network speed is fast. Be careful when making assumptions though — wifi doesn’t necessarily mean fast.

How to use it

When fully implemented this provides the type and speed of connection using two attributes (type and downlinkMax) on the navigator.connection object. Let’s jump straight into an example…

// Some browsers use prefixes so let's cope with them first
var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;

// Check for browser support
if (!!connection) {
  // Get the connection type
  var type = connection.type;

  // Get the connection speed in megabits per second (Mbps)
  var speed = connection.downlinkMax || connection.bandwidth;
}

So easy! The type values are a pre-defined selection of self-explanatory strings:

  • bluetooth
  • cellular
  • ethernet
  • none
  • wifi
  • wimax
  • other
  • unknown

Network speed, when available, is exposed as downlinkMax, previously called bandwidth, although in reality this is difficult for browsers to measure so it may be a while before we’re able to use it. You can also attach a change event listener to navigator.connection to be even more responsive.
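
For example, here’s a hedged sketch (assuming the change event from the newer version of the spec) that reacts whenever the connection changes:

var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;

if (connection && connection.addEventListener) {
  connection.addEventListener('change', function () {
    // Re-check the connection whenever it changes, e.g. wifi -> cellular
    if (connection.type === 'cellular') {
      // Could switch to smaller images, pause a background video, etc.
      console.log('Now on a cellular connection');
    } else {
      console.log('Connection type changed to: ' + connection.type);
    }
  });
}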

For a more thorough look at this API and its background, Aurelio De Rosa has written a good tutorial and network information demo which I recommend.

Status

In the browsers I tested only connection.type was supported properly and that was only in Chrome for Android and Firefox Mobile/OS (you may need to ensure dom.netinfo.enabled is set to true). It’s still early days for this API but its simplicity means it could easily be incorporated into your website or app.

Note: There is a version of the spec hosted on w3.org that currently says work on it has been discontinued. This refers to an older version and current work is being done in the GitHub-hosted version, which should eventually migrate to the W3C site.


Ambient Light Events

Why use it

We’re probably all familiar with struggling to read a screen in bright sunlight. Increasing the contrast of what’s displayed on the screen or hiding distracting backgrounds can make content much easier to read in such cases. The opposite is also true: toning down how vivid a design is can save users from screaming and covering their eyes in a dark environment.

How to use it

Tomomi Imura, AKA @girliemac, has the definitive lowdown on how to respond to differing levels of light. Eventually we’ll all be able to use CSS4 media queries so when the light-level is dim, for example, we can respond accordingly. In the meantime though, there’s the more precise JavaScript approach which gives you access to data from the device’s light sensor. You just listen for a devicelight event and use that event’s value attribute to get the brightness of the user’s surroundings measured in lux. For example:

window.addEventListener('devicelight', function(e) {
  var lux = e.value;

  if (lux < 50) {
    // ambient light is dim so show lower-contrast version
  }
});

See Tomomi's article for a more detailed example with added CSS and a link to her ambient light demo on CodePen.
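
Building on that, here’s a hedged sketch of toggling page-level classes based on the light level (the class names and thresholds are made up):

window.addEventListener('devicelight', function (e) {
  // Roughly: below ~50 lux is dim indoor light, above ~10,000 lux is sunlight
  document.body.classList.toggle('dim-light', e.value < 50);
  document.body.classList.toggle('bright-light', e.value > 10000);
});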

Status

At the time of writing support is only available in Firefox and Chrome beta for Android but here's a short video of her code in action:


Of course, these are not the only web technologies that are part of responsive web design but if you want to show the world that you know more than just media queries, they're a good place to start.

Song: Waiting to Load ♪

A short song about the frustrations of slow websites, especially on mobile.

Go to the song’s SoundCloud page to download, leave a comment, etc.

Lyrics

My heart is pounding as I step off the train
So excited to finally see you again.
I forgot the address of our meeting place tonight
So I get out my phone and check the website.

But there's nothing on the screen, nothing at all
No words appear as I scroll
I picture you sitting alone
As I wait for this website to load
Tired and frustrated
Scared I won't make it
And still this website won't load

Credits

The icon in the cover image is from the wonderful iconmonstr.com.

Song: On The Web ♪

The other day Joe Leech tweeted that he’s “thinking about writing a sit-com about web designers. Best friends Mark Down and Rich Snippets and their hilarious adventures.” Well a sitcom needs a cheesy theme tune so here’s my crack at writing a short song for it. Now all we need are a script, location, actors, production staff…

Go to the song’s SoundCloud page to download, leave a comment, etc.

Lyrics

Monday morning, climb the stairs and reach my desk.
Inbox going crazy with so many change requests.

Just another day at our web design agency
Without the clients this would surely be
The perfect industry

We spend hours in front of a screen
Fuelled by sandwiches and caffeine
But it's worth it 'cause across the globe, our work will soon be seen
On the web

There's a new frontier
On the web
Nice bunch of people here
On the web
You can join us too
On the web
Yeah, it must be true
It's on the web, on the web, on the web.
On the web.

Credits

The globe icon in the cover image is from the wonderful iconmonstr.com.

On Responsive Images — An interview with Dr. Stanley Dards

Wanting to serve different size images to different size screens is nothing new but at last the web has a practical solution for responsive images and art direction. Thanks to a lot of hard work by a lot of people, the Responsive Images Community Group has achieved the goal of seeing its API becoming a valuable building block of the web.

The <picture> element and its related srcset and other attributes are already being incorporated in Blink, Gecko and WebKit, have been included in the HTML5 validator and are part of the WHATWG HTML5 spec. A triumph of common sense and cooperation — hurrah!

To celebrate this, I was lucky enough to interview Dr. Stanley Dards, the wise old man of the web, to get his unique insight on the topic.

And here’s a good practical summary of responsive images and how you can use them today.

The Internet, the Web and an old book

Not long ago, I was explaining to a translator the difference between the Internet and the Web. Understandably they thought they were the same thing, as most people do.

Jump forward a few weeks and I’m packing boxes ready to move house, wondering what I can throw out. A dusty edition of Running Linux from 1996 — surely that can go, being so out-of-date? But flicking through it I noticed a chapter devoted to “The World Wide Web and Mail” and this little gem:

The WWW provides a single abstraction for the many kinds of information available from the Internet.

And there you have it. Much more succinct than my long-winded attempt at explaining the difference. But the part that made me smile was this:

The World Wide Web (WWW) is a relative newcomer to the Internet information hierarchy. The WWW project’s goal is to unite the many disparate services available on the Internet into a single, worldwide, multimedia, hypertext space. Although this may seem very abstract to you now, the WWW is best understood by using it.

Reading this 17 years after it was written, it almost seems quaint — it’s hard to imagine now that readers of a technical manual would not know what the Web is. And yet because the book assumes no previous knowledge it manages to teach a concept in a way that’s clear and stands the test of time.

Who says technical books lose their value as they get older?

Joining the W3C!

I’m delighted to announce that from today, I’m officially joining the W3C!

After four wonderful years at Opera it was difficult to think what could follow, but being able to contribute to the open web as part of the W3C and still remain in Japan is as close to a dream come true as it gets.

I’m based at Keio University’s Shonan-Fujisawa campus (SFC) which is a haven of trees and ducks just outside Tokyo. The new colleagues I’ve met or spoken with both in Japan and scattered around the world have been super welcoming so I feel at home already.

The work I’ll concentrate on will be a mixture of communicating with developers, businesses and media about the W3C’s mission and progress, together with more focussed work coordinating with device-related groups such as web and TV, automotive, signage, etc.

There’s not much more to say other than a big thanks for the kind wishes and support I’ve received to get here. I hope I can do my bit to reinforce the web as The Platform For All.

Examples of digital signage (videos)

Digital signage is a form of electronic display that shows television programming, menus, information, advertising and other messages.

Digital signage is a growing industry around the world but there is particularly strong interest here in Japan and neighbouring South Korea. Most recently, local businesses are focussing more and more on digital signage built with web technologies. However there seems to be some concern that existing standards (HTML5, CSS3, SVG, etc.) don’t address all the issues for web-based signage to really take off. For this reason, the Web-based Signage Business Group was set up as part of the W3C community.

In order to fully understand what’s required before more specific action is taken, use-cases and real-world examples are necessary. It’s often underestimated just how varied digital signage can be so here I’d like to show a few interesting examples from the Far East.

Funky-sized 360-degree display

The resolution of this display in Hikarie, Tokyo, is about 4,000 x 100 pixels. It makes designing for mobile devices look like child’s play! The content is mostly informational consisting of a floor guide, event information, a horizontal clock, etc. shown repeatedly. The back-to-back displays are sometimes in sync and sometimes show differing content.

Side-by-side advertising and info displays

Also from Tokyo is this contrast of digital signage usage. The left-hand screen shows a selection of 15-second and 30-second adverts based on a pre-determined order and frequency. The right-hand screen shows information about the train which is shown in timed mini-loops e.g. Japanese first, English second, but these are interrupted by external triggers such as the train arriving at a station.

Transparent product display

I couldn’t resist this South Korean example — it’s so cool! It was spotted by Rich Tibbett in Seoul and although it’s presumably a simple timed loop, it has a few points of interest. Obviously the transparent screen is one, but also the use of a skewed video whose timing has to be perfectly synchronised with the accompanying animation.

Addition: Vending machine with signage

As suggested by Karl in the comments, here’s a digital vending machine with a discreet built-in camera just above the screen. Normally it operates as a sign showing video adverts but when it detects a person in front of it, the video overlays disappear and it operates as a regular, albeit animated, vending machine. I did hear an unnerving rumour that it has the ability to detect characteristics (a person’s rough age, gender, etc.) to customise how it displays content but I have no solid information.

If you know of other interesting examples, feel free to leave a link in the comments section.

How to convert videos to WebM with FFmpeg/AVConv

After lots of trial and error each time I convert a video to WebM, I finally got around to posting this so I don’t forget next time. In a nutshell, here’s the conversion command that works for me:

avconv -i myvideo.mp4 -acodec libvorbis -aq 5 -ac 2 -qmax 25 -threads 2 myvideo.webm

What is this doing? Let’s go through it bit by bit. Assuming we have a video called myvideo.mp4, the simplest way to convert to WebM is with this little line:

avconv -i myvideo.mp4 myvideo.webm

Easy, but the quality will likely be rubbish hence the use of a few flags. The flags can be divided into three sorts: audio, video and the transcoding itself.

Audio flags

Concentrating on the audio first, we should specify the audio codec, which for WebM is Ogg Vorbis: -acodec libvorbis

In later versions the audio codec is Ogg Vorbis by default but personally I specify it just in case.

The quality can be adjusted with the -aq flag from -1 to 10 with a higher number meaning better quality. I’ve found 4 or 5 to be more than adequate.

The number of channels, e.g. mono (1) and stereo (2), is controlled with the -ac flag.

Video flags

Moving on to the video quality, thankfully it’s nice and simple. Like the audio, we can specify a quality level. With the libvpx library used for WebM, this is actually a quantization level, set with the -qmin and -qmax flags ranging from 0 to 63. In my tests, setting qmin makes no difference so I ignore it. Setting qmax effectively controls the maximum compression that will be applied to each frame. In other words, a higher qmax means higher compression, which results in lower quality and smaller file size. Obviously you should adjust this to whatever’s best for your circumstances, but I’ve found 25 to be a good starting point.

Note that with both the audio and video, setting flags for bitrate (-ab for audio, -b for video) makes little or no difference. Instead, setting quality flags indicates the use of a variable bitrate.

Transcoding flags

Finally, I tend to also use the -threads flag. This simply sets how many CPU threads to use when transcoding. If your PC has multiple cores then a higher number means faster processing but with less spare capacity for running other programs. Incidentally it’s also possible to do 2-pass encoding with WebM using the -pass flag.
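
For completeness, 2-pass encoding might look roughly like this (a hedged, untested sketch using the same older-style flags; note that 2-pass encoding normally targets a bitrate with -b rather than a quality level):

avconv -i myvideo.mp4 -vcodec libvpx -b 1M -pass 1 -an -f webm /dev/null
avconv -i myvideo.mp4 -vcodec libvpx -b 1M -pass 2 -acodec libvorbis -aq 5 myvideo.webm

The first pass analyses the video and writes a log file; the second pass uses that log to do the real encode.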

FFmpeg naming confusion

Note that due to what seem to be political reasons, using the ffmpeg command in Ubuntu results in a sad message.

Was:

*** THIS PROGRAM IS DEPRECATED ***
This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.

Now:

ffmpeg: command not found

It turns out that FFmpeg is very much alive, as is Libav (avconv), both with similar goals but with slightly different ways of doing it. I don’t follow the details and arguments but for practical purposes, using either the ffmpeg or avconv command is fine for converting videos to WebM with the above flags (at the time of writing). Of course, this may change eventually but for the sake of regular users, I hope not.

Unfortunately whatever FFmpeg/Libav disagreement there was has resulted in ffmpeg being removed from Ubuntu and possibly other Linux distros. For the transcoding commands in this post at least, the parameters are the same so if you have problems using avconv try with ffmpeg and vice versa.