IE10 and Beyond: Unifying Touch and Mouse Made Easy with Pointer Events
I often get questions from developers like, "with so many touch-enabled phones and tablets out there, where do I start?" and "what is the easiest way to build for touch input?" Short answer: "it's complex." Surely there's a more unified way to handle multi-touch input on the web – in modern, touch-enabled browsers as well as through fallbacks for older ones. In this article I'd like to show you some browser experiments using MSPointers – an emerging multi-touch technology – and the polyfills that make cross-browser support, well, less complex. It's the kind of code you can experiment with and easily reuse on your own site.
First of all, several touch technologies are evolving on the web – to support all browsers you need to look at the iOS touch event model and the W3C mouse event model in addition to MSPointers. Yet there is growing support (and willingness) to standardize. In September, Microsoft submitted MSPointers to the W3C for standardization, and just yesterday the specification reached the Last Call Working Draft stage: http://www.w3.org/TR/pointerevents. The MS Open Tech team also recently released an initial Pointer Events prototype for WebKit.
The reason I experiment with MSPointers is not based on device share – it's because Microsoft's approach to basic input handling is quite different from what's currently available on the web, and it deserves a look for what it could become. The difference is that developers can write to a more abstract form of input, called a "Pointer." A Pointer can be any point of contact on the screen made by a mouse cursor, pen, finger, or multiple fingers. So you don't waste time coding for every type of input separately.
The Concepts
We will begin by reviewing samples running inside Internet Explorer 10, which exposes the MSPointer events API, and then look at solutions that support all browsers. After that, we will see how the IE10 gesture services can help you handle touch in your JavaScript code in an easy way. As Windows 8 and Windows Phone 8 share the same browser engine, the code & concepts are identical for both platforms. Moreover, everything you'll learn about touch in this article applies equally to Windows Store apps built with HTML5/JS, as they again use the very same engine.
The idea behind MSPointer is to let you address mouse, pen & touch devices via a single code base, using a pattern that matches the classic mouse events you already know. Indeed, mouse, pen & touch have some properties in common: you can move a pointer with them and you can click on an element with them, for instance. So let's address these scenarios with the very same piece of code! Pointers aggregate those common properties and expose them in a way similar to the mouse events.
The most obvious common events are then MSPointerDown, MSPointerMove & MSPointerUp, which map directly to their mouse event equivalents. As output, you get the X & Y coordinates on the screen.
You also have more specific events such as MSPointerOver, MSPointerOut, MSPointerHover or MSPointerCancel.
But of course, there are also cases where you want to handle touch differently from the default mouse behavior to provide a different UX. Moreover, thanks to multi-touch screens, you can easily let the user rotate, zoom or pan elements. A pen/stylus can even provide pressure information that a mouse can't. Pointer Events still aggregate those differences and let you write custom code for each device's specifics.
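As a quick illustration (this is not one of the article's official samples), here is a minimal sketch reading the pressure property exposed on MSPointer events to vary the ink opacity. It assumes a canvas element with the id drawSurface, like the samples further down, and falls back to a mid-range value when no pressure is reported:

var canvas = document.getElementById("drawSurface");
var context = canvas.getContext("2d");

// Note: for touch input you'd also need -ms-touch-action: none on the canvas,
// as explained later in this article
canvas.addEventListener("MSPointerMove", function (event) {
    // 'pressure' is normalized between 0 and 1; devices that can't report
    // pressure (like a mouse) get a constant mid-range fallback instead
    var pressure = event.pressure || 0.5;
    context.fillStyle = "rgba(0, 0, 255, " + pressure + ")";
    context.fillRect(event.clientX, event.clientY, 10, 10);
}, false);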
Note: the following embedded samples are of course best tested on a touch screen – on a Windows 8/RT device or a Windows Phone 8. Still, you have some options:
1. Get a first level of experience by using the Windows 8 Simulator that ships with the free Visual Studio 2012 Express development tools. For more details on how this works, please read this article: Using the Windows 8 Simulator & VS 2012 to debug the IE10 touch events & your responsive design.
2. Have a look at this video, also available in other formats at the end of the article. It demonstrates all the samples below on a Windows 8 tablet supporting touch, pen & mouse.
3. Use a virtual cross-browser testing service like BrowserStack to test interactively if you don't have access to a Windows 8 device. You can use BrowserStack for free for 3 months, courtesy of the Internet Explorer team on modern.IE.
Handling simple touch events
Step 1: do nothing in JS but add a line of CSS
Let's start with the basics. You can take any of your existing JavaScript code that handles mouse events, and it will just work as-is with pen or touch devices in Internet Explorer 10. IE10 indeed fires mouse events as a last resort if you're not handling Pointer Events in your code. That's why you can "click" on a button or on any element of any web page using your finger, even if the developer never imagined someone would one day do it that way. So any code registering mousedown and/or mouseup events will work with no modification at all. But what about mousemove?
Let's review the default behavior to answer that question. For instance, take this piece of code:
<!DOCTYPE html>
<html>
<head>
    <title>Touch article sample 1</title>
</head>
<body>
    <canvas id="drawSurface" width="400px" height="400px" style="border: 1px dashed black;">
    </canvas>
    <script>
        var canvas = document.getElementById("drawSurface");
        var context = canvas.getContext("2d");
        context.fillStyle = "rgba(0, 0, 255, 0.5)";
        canvas.addEventListener("mousemove", paint, false);

        function paint(event) {
            context.fillRect(event.clientX, event.clientY, 10, 10);
        }
    </script>
</body>
</html>
It simply draws blue 10px by 10px squares inside an HTML5 canvas element by tracking the movements of the mouse. To test it, move your mouse inside the box. If you have a touch screen, try to interact with the canvas to check the current behavior for yourself:
Default sample: default behavior if you do nothing. Result: only mousedown/up/click work with touch, i.e. you can only draw blue squares by tapping the screen, not by moving your finger across it.
You'll see that when you move the mouse inside the canvas element, it draws a series of blue squares. But using touch instead, it only draws a single square at the exact position where you tap the canvas. As soon as you try to move your finger over the canvas, the browser tries to pan inside the page, as that's the default behavior defined for touch.
You then need to tell the browser you'd like to override its default behavior and redirect touch events to your JavaScript code rather than interpreting them itself. For that, target the elements of your page that should no longer react to the default behavior and apply this CSS rule to them:
-ms-touch-action: auto | none | manipulation | double-tap-zoom | inherit;
You have various values available based on what you’d like to filter or not. You’ll find the values described in this article: Guidelines for Building Touch-friendly Sites
The classic use case is when you have a map control in your page. You want to let the user pan & zoom inside the map area but keep the default behavior for the rest of the page. In this case, you apply this CSS rule (-ms-touch-action: manipulation) only to the HTML container exposing the map.
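As an illustration, here is a minimal sketch of that approach; #mapContainer is just a hypothetical id for the element hosting the map control:

<style>
    /* Hypothetical id for the element hosting the map control */
    #mapContainer
    {
        /* Touch handling overridden only here;
           the rest of the page keeps the browser's default behavior */
        -ms-touch-action: manipulation;
    }
</style>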
In our case, add this block of CSS:
<style>
    #drawSurface
    {
        -ms-touch-action: none; /* Disable touch behaviors, like pan and zoom */
    }
</style>
Which now generates this result:
Step 1: just after adding -ms-touch-action: none. Result: default browser panning is disabled and mousemove works with touch, but with 1 finger only.
Now, when you move your finger inside the canvas element, it behaves like a mouse pointer. That's cool! But you will quickly ask yourself: why does this code only track 1 finger? Well, this is because we're falling back on the last thing IE10 does to provide a very basic touch experience: mapping one of your fingers to simulate a mouse. And as far as I know, we only use 1 mouse at a time. So 1 mouse == 1 finger max with this approach. Then how do we handle multi-touch events?
Step 2: Use MSPointer Events instead of mouse events
Take any of your existing code and replace your registration of "mousedown/up/move" with "MSPointerDown/Up/Move", and your code will directly support a multi-touch experience inside IE10!
For instance, in the previous sample, change this line of code:
canvas.addEventListener("mousemove", paint, false);
to this one:
canvas.addEventListener("MSPointerMove", paint, false);
And you will get this result:
Step 2: using MSPointerMove instead of mousemove. Result: multi-touch works.
You can now draw as many series of squares as your screen supports touch points! Even better, the same code works for touch, mouse & pen. This means, for instance, that you can use your mouse to draw some lines at the same time you're using your fingers to draw others.
If you'd like to change the behavior of your code based on the type of input, you can test the pointerType property value. For instance, let's imagine that we want to draw 10px by 10px red squares for fingers, 5px by 5px green squares for a pen and 2px by 2px blue squares for the mouse. You need to replace the previous handler (the paint function) with this one:
function paint(event) {
    var squaresize;

    if (event.pointerType) {
        switch (event.pointerType) {
            case event.MSPOINTER_TYPE_TOUCH:
                // A touchscreen was used
                // Drawing in red with a square of 10
                context.fillStyle = "rgba(255, 0, 0, 0.5)";
                squaresize = 10;
                break;
            case event.MSPOINTER_TYPE_PEN:
                // A pen was used
                // Drawing in green with a square of 5
                context.fillStyle = "rgba(0, 255, 0, 0.5)";
                squaresize = 5;
                break;
            case event.MSPOINTER_TYPE_MOUSE:
                // A mouse was used
                // Drawing in blue with a square of 2
                context.fillStyle = "rgba(0, 0, 255, 0.5)";
                squaresize = 2;
                break;
        }
        context.fillRect(event.clientX, event.clientY, squaresize, squaresize);
    }
}
And you can test the result here:
Step 2b: testing pointerType to distinguish touch/pen/mouse. Result: you can change the behavior for mouse/pen/touch, but since step 2a the code now only works in IE10+.
If you're lucky enough to have a device supporting all 3 types of input (like the Sony Duo 11, the Microsoft Surface Pro or the Samsung tablet some of you had during BUILD2011), you will be able to see 3 kinds of drawing based on the input type. Great, isn't it?
Still, there is a problem with this code. It now handles all types of input properly in IE10, but it doesn't work at all in browsers that don't support MSPointer Events, like IE9, Chrome, Firefox, Opera & Safari.
Step 3: Do feature detection to provide fallback code
As you're probably already aware, the best approach to multi-browser support is feature detection. In our case, you need to test this:
window.navigator.msPointerEnabled
Beware: this only tells you whether the current browser supports MSPointer; it doesn't tell you whether touch is supported. To test for touch support, you need to check msMaxTouchPoints.
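For instance, here is a small sketch (not part of the original samples) combining both checks; the console.log messages are just illustrative:

if (window.navigator.msPointerEnabled) {
    // The browser exposes the MSPointer events model (IE10)
    if (window.navigator.msMaxTouchPoints > 0) {
        // And the device actually has a touch screen
        console.log("MSPointer supported, up to " + window.navigator.msMaxTouchPoints + " touch points");
    } else {
        // MSPointer is available but only mouse/pen input was detected
        console.log("MSPointer supported, but no touch screen detected");
    }
} else {
    // Fall back to mouse (and/or touch) events in other browsers
    console.log("MSPointer not supported in this browser");
}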
In conclusion, to have code that supports MSPointer in IE10 and falls back properly to mouse events in other browsers, you need code like this:
var canvas = document.getElementById("drawSurface");
var context = canvas.getContext("2d");
context.fillStyle = "rgba(0, 0, 255, 0.5)";

if (window.navigator.msPointerEnabled) {
    // Pointer events are supported.
    canvas.addEventListener("MSPointerMove", paint, false);
}
else {
    canvas.addEventListener("mousemove", paint, false);
}

function paint(event) {
    // Default behavior for mouse on non-IE10 devices
    var squaresize = 2;
    context.fillStyle = "rgba(0, 0, 255, 0.5)";

    // Check for pointer type on IE10
    if (event.pointerType) {
        switch (event.pointerType) {
            case event.MSPOINTER_TYPE_TOUCH:
                // A touchscreen was used
                // Drawing in red with a square of 10
                context.fillStyle = "rgba(255, 0, 0, 0.5)";
                squaresize = 10;
                break;
            case event.MSPOINTER_TYPE_PEN:
                // A pen was used
                // Drawing in green with a square of 5
                context.fillStyle = "rgba(0, 255, 0, 0.5)";
                squaresize = 5;
                break;
            case event.MSPOINTER_TYPE_MOUSE:
                // A mouse was used
                // Drawing in blue with a square of 2
                context.fillStyle = "rgba(0, 0, 255, 0.5)";
                squaresize = 2;
                break;
        }
    }
    context.fillRect(event.clientX, event.clientY, squaresize, squaresize);
}
And again you can test the result here:
Sample 3: feature-detecting msPointerEnabled to provide a fallback. Result: full experience in IE10 and default mouse events in other browsers.
Step 4: support all touch implementations
If you’d like to go even further and support all browsers & all touch implementations, you have 2 choices:
- 1 – Write code addressing both event models in parallel, as described for instance in this article: Handling Multi-touch and Mouse Input in All Browsers
- 2 – Just add a reference to HandJS, the awesome JavaScript polyfill library written by my friend David Catuhe, as described in his article: HandJS a polyfill for supporting pointer events on every browser
As I mentioned in the introduction of this article, Microsoft recently submitted the MSPointer Events specification to W3C for standardization. The W3C created a new Working Group and it has already published a last call working draft based on Microsoft’s proposal. The MS Open Tech team also recently released an initial Pointer Events prototype for Webkit that you might be interested in.
While the Pointer Events specification is not yet a standard, you can already implement code that supports it by leveraging David's polyfill, and be ready for the day Pointer Events becomes a standard implemented in all modern browsers. With David's library, the events are propagated to MSPointer on IE10, to Touch Events on WebKit-based browsers and finally to mouse events as a last resort. It's damn cool! Check out his article to discover and understand how it works. Note that this polyfill will also be very useful for supporting older browsers, with elegant fallbacks to mouse events.
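To give you a rough idea before reading his article, here is a minimal sketch of what the registration could look like once the polyfill is referenced. I'm assuming here that hand.js is referenced via a local script file and that it lets you register the unprefixed W3C event names; check David's article for the exact file name and API:

<!-- Hypothetical path: grab the actual file from David's article -->
<script src="hand.js"></script>
<script>
    var canvas = document.getElementById("drawSurface");
    var context = canvas.getContext("2d");
    context.fillStyle = "rgba(0, 0, 255, 0.5)";

    // A single registration: the polyfill routes it to MSPointer events on IE10,
    // to Touch Events on WebKit-based browsers and to mouse events as a last resort
    canvas.addEventListener("pointermove", function (event) {
        context.fillRect(event.clientX, event.clientY, 10, 10);
    }, false);
</script>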
To get an idea of how to use this library, have a look at this article: Creating an universal virtual touch joystick working for all Touch models thanks to Hand.JS, which shows you how to write a virtual touch joystick using pointer events. Thanks to HandJS, it works on IE10 on Windows 8/RT, on Windows Phone 8 and on iPad/iPhone & Android devices with the very same code base!
Recognizing simple gestures
Now that we’ve seen how to handle multi-touch, let’s see how to recognize simple gestures like tapping or holding an element and then some more advanced gestures like translating or scaling an element.
IE10 provides an MSGesture object that's going to help us. Note that this object is currently specific to IE10 and not part of the W3C submission. Combined with the MSCSSMatrix object (our equivalent of WebKitCSSMatrix), you'll see that you can build very interesting multi-touch experiences in a very simple way. MSCSSMatrix represents a 4×4 homogeneous matrix that gives Document Object Model (DOM) scripting access to CSS 2-D and 3-D Transforms functionality. But before playing with that, let's start with the basics.
The basic concept is to first register an event handler for MSPointerDown. Inside that MSPointerDown handler, you choose which pointers you'd like to send to the MSGesture object so it can detect a specific gesture. It will then trigger one of these events: MSGestureTap, MSGestureHold, MSGestureStart, MSGestureChange, MSGestureEnd, MSInertiaStart. The MSGesture object takes all the pointers submitted as input, applies a gesture recognizer on top of them and provides formatted data as output. The only thing you need to do is choose/filter which pointers should be part of the gesture (based on their ID, coordinates on screen, whatever…). The MSGesture object does all the magic for you after that.
Sample 1: handling the hold gesture
We're going to see how to hold an element (a simple DIV with an image as its background). Once the element is held, we will add some corners to indicate to the user that they have selected this element. The corners are generated by dynamically creating 4 DIVs added on top of each corner of the image. Finally, some CSS tricks use transforms and linear gradients in a smart way to obtain something like this:
The sequence is the following:
1 - register the MSPointerDown & MSGestureHold events on the HTML element you're interested in
2 - create an MSGesture object that targets this very same HTML element
3 - inside the MSPointerDown handler, add to the MSGesture object the various pointerIds you'd like to monitor (all of them or a subset, based on what you'd like to achieve)
4 - inside the MSGestureHold event handler, check the details to see whether the user has just started the hold gesture (MSGESTURE_FLAG_BEGIN flag). If so, add the corners. If not, remove them.
This leads to the following code:
<!DOCTYPE html>
<html>
<head>
    <title>Touch article sample 5: simple gesture handler</title>
    <link rel="stylesheet" type="text/css" href="toucharticle.css" />
    <script src="Corners.js"></script>
</head>
<body>
    <div id="myGreatPicture" class="container"></div>
    <script>
        var myGreatPic = document.getElementById("myGreatPicture");

        // Creating a new MSGesture that will monitor the myGreatPic DOM element
        var myGreatPicAssociatedGesture = new MSGesture();
        myGreatPicAssociatedGesture.target = myGreatPic;

        // You need to first register to MSPointerDown to be able to
        // have access to more complex gesture events
        myGreatPic.addEventListener("MSPointerDown", pointerdown, false);
        myGreatPic.addEventListener("MSGestureHold", holded, false);

        // Once pointer down is raised, we're sending all pointers to the MSGesture object
        function pointerdown(event) {
            myGreatPicAssociatedGesture.addPointer(event.pointerId);
        }

        // This event will be triggered by the MSGesture object
        // based on the pointers provided during the MSPointerDown event
        function holded(event) {
            // The gesture begins, we're adding the corners
            if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
                Corners.append(myGreatPic);
            }
            else {
                // The user has released his finger, the gesture ends
                // We're removing the corners
                Corners.remove(myGreatPic);
            }
        }

        // To avoid having the equivalent of the contextual
        // "right click" menu being displayed on the MSPointerUp event,
        // we're preventing the default behavior
        myGreatPic.addEventListener("contextmenu", function (e) {
            e.preventDefault(); // Disables system menu
        }, false);
    </script>
</body>
</html>
And here is the result:
Try to simply tap or mouse-click the element: nothing happens. Touch & hold a single finger on the image, or do a long mouse click on it: the corners appear. Release your finger: the corners disappear.
Touch & hold 2 or more fingers on the image: nothing happens, as the hold gesture is only triggered when a single finger is holding the element.
Note: the white border, the corners & the background image are set via CSS defined in toucharticle.css. Corners.js simply creates 4 DIVs (with the append function) and places them on top of the main element in each corner with the appropriate CSS classes.
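I won't reproduce Corners.js here, but here is a hedged sketch of what such a helper could look like; the .corner and position class names are hypothetical, and the real positioning & gradient tricks live in toucharticle.css:

// Hypothetical sketch of a Corners-like helper, not the original Corners.js
var Corners = {
    append: function (element) {
        ["topLeft", "topRight", "bottomLeft", "bottomRight"].forEach(function (position) {
            var corner = document.createElement("div");
            corner.className = "corner " + position; // styled & positioned via CSS
            element.appendChild(corner);
        });
    },
    remove: function (element) {
        var corners = element.querySelectorAll(".corner");
        for (var i = 0; i < corners.length; i++) {
            element.removeChild(corners[i]);
        }
    }
};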
Still, there is something I'm not happy with in the current result. While you're holding the picture, as soon as you move your finger slightly, the MSGESTURE_FLAG_CANCEL flag is raised and caught by the handler, which removes the corners. I would rather remove the corners only once the user releases their finger anywhere above the picture, or as soon as they move their finger out of the box delimited by the picture. To do that, we're going to remove the corners only on MSPointerUp or MSPointerOut. This gives us this code instead:
var myGreatPic = document.getElementById("myGreatPicture");

// Creating a new MSGesture that will monitor the myGreatPic DOM element
var myGreatPicAssociatedGesture = new MSGesture();
myGreatPicAssociatedGesture.target = myGreatPic;

// You need to first register to MSPointerDown to be able to
// have access to more complex gesture events
myGreatPic.addEventListener("MSPointerDown", pointerdown, false);
myGreatPic.addEventListener("MSGestureHold", holded, false);
myGreatPic.addEventListener("MSPointerUp", removecorners, false);
myGreatPic.addEventListener("MSPointerOut", removecorners, false);

// Once touched, we're sending all pointers to the MSGesture object
function pointerdown(event) {
    myGreatPicAssociatedGesture.addPointer(event.pointerId);
}

// This event will be triggered by the MSGesture object
// based on the pointers provided during the MSPointerDown event
function holded(event) {
    // The gesture begins, we're adding the corners
    if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
        Corners.append(myGreatPic);
    }
}

// We're removing the corners on pointer up or out
function removecorners(event) {
    Corners.remove(myGreatPic);
}

// To avoid having the equivalent of the contextual
// "right click" menu being displayed on the MSPointerUp event,
// we're preventing the default behavior
myGreatPic.addEventListener("contextmenu", function (e) {
    e.preventDefault(); // Disables system menu
}, false);
which now provides the behavior I was looking for:
Sample 2: handling scale, translation & rotation
Finally, if you want to scale, translate or rotate an element, you only need to write a few lines of code. You first need to register the MSGestureChange event. This event sends you, via the various attributes described in the MSGestureEvent object documentation (rotation, scale, translationX, translationY and so on), the transformation currently being applied to your HTML element.
Even better, by default the MSGesture object provides an inertia algorithm for free. This means you can grab the HTML element and throw it across the screen with your fingers, and the animation is handled for you.
Lastly, to reflect the changes sent by MSGesture, you need to move the element accordingly. The easiest way to do that is to apply a CSS transform mapping the rotation, scale and translation details of your finger gesture. For that, use the MSCSSMatrix object.
In conclusion, if you'd like to add all these cool gestures to the previous sample, register the event like this:
myGreatPic.addEventListener("MSGestureChange", manipulateElement, false);
And use the following handler:
function manipulateElement(e) {
    // Uncomment the following code if you want to disable the built-in inertia
    // provided by dynamic gesture recognition
    // if (e.detail == e.MSGESTURE_FLAG_INERTIA)
    //     return;

    // Get the latest CSS transform on the element
    var m = new MSCSSMatrix(e.target.currentStyle.transform);
    e.target.style.transform = m
        .translate(e.offsetX, e.offsetY)            // Move the transform origin under the center of the gesture
        .rotate(e.rotation * 180 / Math.PI)         // Apply rotation
        .scale(e.scale)                             // Apply scale
        .translate(e.translationX, e.translationY)  // Apply translation
        .translate(-e.offsetX, -e.offsetY);         // Move the transform origin back
}
which gives you this final sample:
Try to move and throw the image inside the black area with 1 or more fingers. Try also to scale or rotate the element with 2 or more fingers. The result is awesome and the code is very simple as all the complexity is being handled natively by IE10.
Video & direct link to all samples
If you don't have a touch screen available for IE10 and you're wondering how these samples work, have a look at this video where I walk through all the samples shared in this article on the Samsung BUILD2011 tablet:
And you can also have a look at all of them here:
- Simple touch default sample with nothing done
- Simple touch sample step 1 with CSS -ms-touch-action
- Simple touch sample step 2a with basic MSPointerMove implementation
- Simple touch sample step 2b with pointerType differentiation
- Simple touch sample step 3 with MSPointers and mouse fallback
- MSGesture sample 1: MSGestureHold handler
- MSGesture sample 1b: MSGestureHold handler
- MSGesture sample 2: MSGestureChange
Associated resources:
- W3C Pointer Events Specification
- Handling Multi-touch and Mouse Input in All Browsers: the polyfill library that should help a lot of developers in the future
- Pointer and gesture events
- Go Beyond Pan, Zoom, and Tap Using Gesture Events
- IE Test Drive Browser Surface which has greatly inspired lot of the embedded demos
With all the details shared in this article and the associated links to other resources, you're now ready to implement the MSPointer Events model in your websites & Windows Store applications, and to easily enhance the experience of your users in Internet Explorer 10.
About the Author
David Rousset is a Developer Evangelist at Microsoft, specializing in HTML5 and web development. Read his blog on MSDN or follow him @davrous on Twitter.