Day 2: Chrome Dev Summit 2020

Dec 10, 2020 17:30 · 23107 words · 109 minute read

(upbeat music) - Welcome to day two. I'm Mariko from the Web Developer Relations team at Chrome. This is a bit different from the usual Chrome Dev Summit session, in which we mostly focus on platform technology. This time I'm going to share how we've been working this year. Just as a reminder before we start, I will mention the ongoing public health crisis, as well as natural disasters and the many social movements for change.

00:54 - And for me, and I'm sure for many of you too, it is a very emotionally charged topic, so please be mindful in the comment section. This year prompted a lot of change in the way we look at the current status quo, and we all needed to adapt to change really quickly. I found myself talking to my friends often and asking them how other teams are dealing with this year. So I wanted to share with you how we've been adapting, and I hope you also find those conversations interesting. Let's wind back to March. At the beginning of March, when the threat of COVID-19 in the United States was getting more serious, a small group of us was supposed to have an in-person Hack Week in Boston, because we were in the middle of building tooling.report.

01:50 - Our new information site, which we released in June. Just a week before our scheduled Hack Week, international travel became out of the question. But we couldn't just not do Hack Week; we had a deadline to hit, and everybody had already cleared their calendars. So we decided to try having Hack Week virtually. Our team is split across different time zones, so we tried to recreate a being-in-the-same-room atmosphere for a few hours a day when all of us were working together.

02:25 - How this works is that we created a video chat room where everybody can join, and most of the time we mute ourselves and keep our heads down writing code on our own. Occasionally somebody unmutes and shares their playlist. And when you have questions or things that need to be discussed, you unmute and say, "Hey, what do you think of doing this?" At that moment, everybody can start discussing and resolve the issue right then and there. This setup worked surprisingly well, and this virtual conference room became a frequent activity on our team beyond that Hack Week, especially early on in our work-from-home journey.

03:10 - There have also been more intentionally organized non-work virtual events too, like game time, dedicated time to just chat, and of course we can't forget the pet photo sharing thread. Another thing that moved to a virtual setting is conferences and events. This was a big change for us, because presenting at and attending in-person events is a huge part of what we do as a team. Since we couldn't gather in person anymore, we shifted to virtual events. But it wasn't just "record a session and upload it to the internet," right? Usually when we shoot a video for YouTube, we are incredibly privileged to have access to studio space, with professional lighting and camera equipment.

03:57 - But now everybody had to record their session from their own home. So how it works, including this video shoot, is that I'm on a video call with producers and directors, screen-sharing a camera remote app on my laptop, which is connected to my camera, so that they can make sure all of my recording settings are correct. This is how a video gets made these days. And I just want to mention that we were incredibly lucky to have the support of a production team and video editors to be able to switch to remote recordings and a virtual conference setup so smoothly. It is not just video production; there are comment moderators, people who prepare closed captions and translations, and much more work that goes into putting on events like this. Going beyond working virtually, this year really highlighted how critical the web is to stay connected, stay informed, and do the things that keep our daily lives going.

04:59 - I've seen countless online spreadsheets to track critical resource needs and volunteer sign-ups. I've seen my local butcher and bakery adopt online order forms and payment systems, sometimes overnight. And many of us developers heard calls to do something and started building. Like helpmainstreet.com, which lets you find local businesses and buy gift cards or order online while the local economy slows down. Or not911, a PWA that helps you find alternative resources to resolve community issues without calling the police.

05:35 - And there are many, many more of these, but of course everything happened with such intensity that not everything went smoothly, and people experienced a lot of frustration on the web too. In response to the COVID-19 pandemic, our team put together a list of helpful resources on one page, from how to diagnose a performance issue to how to add structured data so important announcements are properly surfaced in search. It was all originally put together in response to the public health crisis, but I think the same guides apply to any project trying to address people's critical needs.

06:17 - After all, good web development practice is appreciated not only in regular times, but also in times of need. And I bring this up not to say, hey, our team knows the best practices, no, but to remind us all that doing good web development helps people. I'm based in New York City, the epicenter of the global pandemic, and one of the major US cities where people marched in the streets for racial justice. And while everything else was going on, people around me were building things to help others, but I was lost. I didn't know what I could do and I felt powerless, and I still feel lost. In case you are also feeling that way, I would like to remind us that what we do day to day as web developers can be a critical action for somebody else. That log-in form you spent time building, or that accessibility feature you added, or that bit of performance tuning you did, does contribute to somebody else's life getting a little easier. I'd like to end this session with a little story about how we are reacting to the call for equality, particularly for Black+ members of the community.

07:34 - When trying to address systemic issues, it is not a matter of doing one thing and fixing everything. There are so many things we need to work on, and we are working on many projects, which I hope to share with you soon, but I want to call out one particular point about language and words, because even on the most well-meaning team, the language being used can still hurt somebody and contribute to systemic racism. So this year we started to take more direct action in changing the language we use. We started searching for non-inclusive language in our code bases. We started putting inclusive language reminders on our pull requests, and we are starting to move away from the master branch naming convention that we've been using.

08:22 - And we are rewriting the team's mission statement to put our values into words, clarifying our core values and what we care about beyond just technical achievements. This work is just starting now, but I cannot wait to share it with you in the future. I'm particularly invested in this topic because I am based in the US and I live in New York City, so it's my reality, but I know that around the globe many of you are participating in calls for different causes: for disaster relief, for justice, for democracy, for safety, and many more. Our team certainly tries to stay informed, but it's hard to know as much as the people who are actively participating. So if you see something we do as out of place or stepping on something, please call us out.

09:18 - And if you need particular help, please let us know. I know this session was a bit unusual for us, but I hope we keep having these conversations so that it doesn't feel unusual anymore next year. Thank you for being here, and I look forward to chatting with you in the comment section. Enjoy the rest of your sessions. (upbeat music) - Hi everyone, I'm Chef Jecelyn from the Chrome DevTools team. (snaps) This is a plate of fried rice I just made.

09:58 - To prepare this, you need garlic, rice, egg, anchovies, and tomato. These are the great ingredients that make this tasty fried rice. (sniffs) Hm! It smells great too. You know what else is great? Chrome DevTools has great new features to help you better debug your site. Let us dive into these great features together. (snaps) First off, we have G for great tooling. DevTools now, finally, has full support for CSS grid debugging. Yes! Say you have a grid with three defined areas: banner, sidebar, and content. Here is how the page looks. In the Elements panel, notice that there is a new grid badge next to the grid element. Click on it to toggle the grid overlay and line numbers.
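The grid being inspected might be defined with something like this (a minimal sketch; the class names are assumptions, only the area names come from the talk):

```css
.page {
  display: grid;
  grid-template-areas:
    "banner  banner"
    "sidebar content";
  grid-template-columns: 200px 1fr;
  gap: 1rem;
}
.banner  { grid-area: banner; }
.sidebar { grid-area: sidebar; }
.content { grid-area: content; }
```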

10:49 - You can customize the overlay further in the new Layout pane. For example, you can hide the line labels, view the area names, or extend the grid lines for better debugging. You can even change the color of the overlay. What is even greater is that when you animate the grid, the labels rotate accordingly as well. That is attention to detail. Go try it out. Next, R for the Rendering tab and Sensors tab.

11:19 - Did you know you can emulate locales by setting a location? For example, I have an upcoming cooking class at 12:00 p.m. on Christmas Eve, Malaysia time. I have an audience from all around the world, and I would love my page to display the class time according to each viewer's time zone. Now, set the location to Tokyo and refresh the page. The class time is now updated to 1:00 p.m., Tokyo time. This feature is handy, especially if your site supports multiple languages and time zones.
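A page like the one described might format the class time in the viewer's own locale and time zone with something along these lines (a sketch; the date value and the element ID are assumptions):

```js
// The class starts at 12:00 p.m. on Dec 24, Malaysia time (UTC+8).
const classStart = new Date('2020-12-24T12:00:00+08:00');

// Format the same instant in the visitor's locale and default time zone —
// exactly the values DevTools' locale/location emulation overrides.
const formatted = new Intl.DateTimeFormat(navigator.language, {
  dateStyle: 'long',
  timeStyle: 'short',
}).format(classStart);

document.querySelector('#class-time').textContent = formatted;
```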

11:55 - Next, we added a new Idle State Emulation to support the Idle Detection API. This API allows you to detect inactive users and react to idle state changes. For example, imagine this page is a rice ordering kiosk. When the screen is locked, you might want to display a slideshow of your promotions. When the user is active again, redirect them to the ordering page. These state changes were hard to debug previously, because you needed to wait for the actual idle state to change. Now there's no more waiting around for that. Next, let's take a look at the new disable local fonts emulation. Say you define a font family, Rubik, in an @font-face rule, set the source to the local Rubik Light font when found, and fall back to the remote Rubik-Black font if it's not. In the Computed pane, you can see the current font is rendered using the local Rubik Light font.
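The @font-face rule being described might look roughly like this (a sketch; the remote file URL is an assumption):

```css
@font-face {
  font-family: 'Rubik';
  /* Prefer the locally installed Rubik Light; otherwise
     download the remote Rubik-Black file. */
  src: local('Rubik Light'),
       url('/fonts/Rubik-Black.woff2') format('woff2');
}

body {
  font-family: 'Rubik', sans-serif;
}
```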

13:00 - During development, you might want to disable the local sources to properly debug and verify your web fonts. Let's try doing that and refresh the page. See, the font is now Rubik Black, loaded from the network. DevTools also added a new CSS media emulation for the prefers-reduced-data media query. In this example, we have CSS that skips downloading custom web fonts if the user has turned on data saving mode. Enable the prefers-reduced-data emulation. See, the rendered font now falls back to the system font Arial instead. Next, E for Elements panel enhancements. You can now edit styles created with CSS-in-JS frameworks and the CSS Object Model APIs. Here, we've got a page with a few eggs. Let's say this is the CSS-in-JS code used to animate them.
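The CSS-in-JS code itself isn't reproduced in the transcript; a minimal sketch of the kind of CSSOM-based styling being described might look like this (the class name and keyframes are assumptions):

```js
// Build a stylesheet entirely from JavaScript via the CSS Object Model —
// the kind of style rules DevTools can now edit in the Styles pane.
const sheet = new CSSStyleSheet();
sheet.replaceSync(`
  @keyframes wobble {
    from { transform: rotate(-10deg); }
    to   { transform: rotate(10deg); }
  }
  .egg {
    animation: wobble 1s ease-in-out infinite alternate;
  }
`);
document.adoptedStyleSheets = [...document.adoptedStyleSheets, sheet];
```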

13:59 - Click and run the code, then right-click and inspect one of the eggs. You can now edit the animations in the Styles pane. These styles were not editable previously. Next, did you spot a new button in the Styles pane just now? This little button lets you toggle the Computed sidebar pane, so you can view both the Styles and Computed panes on the same screen. Good news, especially for those big-screen users. We added a group checkbox in the Computed pane as well. This new checkbox lets you group the computed properties. Since I'm looking at the egg animations now, this allows me to easily focus on just the animation properties that I care about. If you want to take a screenshot of the egg container, we have a shortcut for you. Click on the egg element and select Capture node screenshot. There you have it: your screenshot is taken.

15:00 - Sweet! The :focus-visible pseudo-class is a native CSS way to style elements that are in focus and need a visible indicator to show it. You can now use DevTools to force and test the focus-visible state. Next, we have A for accessibility enhancements. The inspect mode tooltip now displays more accessibility information. Hover over an element, and the inspect mode tooltip indicates whether the element has an accessible name, its role, and whether it is keyboard-focusable.

15:36 - Use the new emulate vision deficiencies feature to get a better idea of how people with different types of vision deficiencies experience your site. DevTools can emulate blurred vision and four different types of vision deficiencies. For example, protanopia is the inability to perceive red and green colors. The text colors of both "anchovy" and "delicious" are not accessible here, but they might still look okay to people with typical vision. Let's enable the protanopia emulation. Notice that the color contrast of both "anchovy" and "delicious" becomes even worse now; the text is almost unreadable. Go try out the other types of vision deficiency emulations to experience them yourself. Next, you can fix low-contrast text with our color suggestions. Let's try to fix the color of "anchovy." Select the element, open the color picker in the Styles pane, and expand the contrast ratio section. Here we provide AA and AAA color suggestions.

16:47 - Click on the suggested color and see, the color is fixed. It might be tedious to go through each element of the page and try to spot all the color contrast issues. DevTools is here to help. In just one click, the CSS Overview panel helps you identify all the low-contrast text on your page. Here, the report shows two issues on the page. Click on an issue to view the list of elements that have it, then click to focus on an element and fix it.

17:19 - The final letter, T, represents new tabs and panels in DevTools. Have you ever seen a console full of browser warnings and had no clue how to fix them? The new Issues tab aims to improve that. Open the console; a message is shown if your site has issues. Click on "view issues" to open the Issues tab. Expand an issue, and it provides more information, the affected resources, and guidance on how to fix it.

17:51 - Next, you can now debug the Web Authentication API with the new WebAuthn tab. Open the WebAuthn tab, create a virtual authenticator, and watch it play the part of a real device. There's no need to carry around a bag of different authenticators to debug your implementation anymore. DevTools now supports moving tools between the top and the bottom panels, so you can view any two tools at once.

18:20 - For example, if you would like to view the Elements and CSS Overview panels together, you can right-click on the CSS Overview panel and select "move to bottom" to move it to the bottom. This way, it is easier for you to view and focus on an element and edit it without switching context. Last one: use the new Media panel to view and debug media player information. Here, I have a page with a video player. Play the video and open the timeline pane. Try to pause, play, and fast-forward the video. The video playback and buffering status are updated in real time. Together with the properties, events, and messages information, this panel can help you identify potential media issues more quickly. DevTools has some new improvements for WebAssembly and secure context debugging as well. Check out the talks by Ingvar and Camille for those. Phew! That's all. Remember to try out these great new features.

19:28 - Every six weeks, I publish the What's New in DevTools post and video. Don't miss the updates; follow us on Twitter and YouTube. Thanks for watching. See you! I'm going to enjoy my great fried rice now. Oh, did you know we have an engineering blog? Go to this link to find out more. And by the way, my teammates Tim, Jack, and Paul have a talk about how we upgraded the DevTools architecture to the modern web. Remember to check that out. See you next time. (upbeat music) - Hi, I'm Una Kravets, a developer advocate on the Chrome team, focusing on CSS and DevTools.

20:20 - I'm really excited to chat with you today about what many regard as the future of web styling: CSS Houdini. Houdini is an umbrella term that describes a series of low-level browser APIs which make it easier for authors to access and extend CSS by hooking into the styling and layout process of the browser's rendering engine. This means that developers now have much more control and power over the styles they write. For example, instead of waiting for a browser to implement an angled-borders feature, you can write your own paint worklet, apply it to both borders and clipping, and get an effect just as you would with a rounded border radius. Instead of waiting for a browser to implement masonry, you could implement a layout worklet to imitate a browser-based implementation, or you can use an existing one.

21:09 - And beyond worklets, Houdini enables more semantic CSS through the Typed Object Model, and lets developers define advanced CSS custom properties with syntax, default values, and inheritance through the Properties and Values API. Today, I'm going to be focusing on the CSS Paint API, which is supported in all Chromium-based browsers, is partially supported in Safari, and is under consideration for Firefox. However, don't let that native support picture discourage you, because with the CSS Paint Polyfill created by my teammate, Jason Miller, you can still get creative with the Houdini Paint API and see your styles work across all modern browsers. So let's focus on the CSS Paint API today. This API enables developers to define canvas-like custom painting functions that can be used directly in CSS as backgrounds, borders, masks, and more.
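A minimal paint worklet might look something like this (an illustrative sketch, not one of the worklets from the talk; the names are assumptions):

```js
/* checker.js — runs inside the paint worklet scope */
registerPaint('checker', class {
  static get inputProperties() { return ['--checker-size']; }

  paint(ctx, size, props) {
    // Tile size comes from a custom property, falling back to 16px.
    const tile = parseInt(props.get('--checker-size')) || 16;
    for (let y = 0; y * tile < size.height; y++) {
      for (let x = 0; x * tile < size.width; x++) {
        if ((x + y) % 2 === 0) ctx.fillRect(x * tile, y * tile, tile, tile);
      }
    }
  }
});

/* On the main page, you would load it and then use it from CSS
   as `background-image: paint(checker)`: */
// CSS.paintWorklet.addModule('checker.js');
```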

22:01 - In the angled corners example I just showed, it's being used as both a mask and a border: to draw the border as a border image, and to mask the element so the click interaction effect doesn't overflow. There is a whole world of possibilities for how you can use CSS Paint in your own user interfaces. To make working with the Paint API and other Houdini worklets a little easier to explore and consume, my team put together a resource called houdini.how. It's your all-in-one go-to for CSS Houdini worklets and information: an aggregated library and a reference. Let's check it out. You can learn all about CSS Houdini itself, with links to further reading, on the About page.

22:41 - Here, we list all the Houdini APIs with some really great links for further reading, if you're interested in exploring these a little bit more. There's also data from Is Houdini Ready Yet?, where we have a list of all of the browsers, the W3C spec, and links to the spec drafts and recommendations. The Paint API, for example, is in candidate recommendation state. You can also see the level of support, and whether a feature is in development or under consideration; whatever's going on with Houdini, this is a great place to check that out and get that information. On the Worklets page, you can browse various paint worklets and play around with different values to make sure they fit your brand. So here is just an inspiration page where you can see some of the possibilities: using a worklet as a border, using it as a background. You can make some really cute little confetti, or you could make it really big; this is just another example of how you can use it.

23:38 - Why have one underline when you could have many underlines? So this is an example called extra underline. You can change the width, the spread, and again the color. There's an example of reverse border radius where, instead of the border radius coming from the inside out, it's doing the opposite. It's creating this clip, which I think is cool because it reminds me of vintage stamps or tickets. So I think that one's neat. There's also this powdered gradient; you can change the direction and the color, you can really play around with it.

24:06 - I like it in white because it's kind of festive. There's also the sparkles example. You can change the number of sparkles and the hue, so you can have sparkles of every color, plus the height and width of the sparkles, which changes the size and the weight. So you can start to really play around with these sparkles. There's also that angled corners example. Here you can make all the corners different sizes, start to explore what that might look like, and change the weight of this as well.

24:32 - There's the static gradient, and I think this is really fun because the more you increase the gradient size, the more you're sort of pixelating it. It reminds me a little of a roaring fire, especially as it's re-rendering and animating with this randomized function. So I think that these are pretty neat. As I mentioned earlier, despite the native implementation status, these all work cross-browser thanks to the Houdini Paint Polyfill, so have no fear and feel free to get creative with CSS Houdini. And when you've found the right worklets that you want to use or expand upon in your own applications, there are a few ways that you can use them and implement them into your build system. The Usage page here will show you how. To quickly get started with a prototype, we recommend using unpkg.

25:16 - So just call the paint worklet directly and register it to your application, but when you're using it in production environments you'll likely want to manage the worklet in your file system. Here, we have a CodePen example with the confetti worklet; this one is called extra confetti. What I'm doing is directly adding it to my page with CSS.paintWorklet.addModule, using this CDN link. I'm also including the polyfill script and using unpkg to manage that as well. This is just a really quick way to get started.

25:49 - And the way that you get that link is: if you go to the Worklets page, this right here is the CDN link; you can just copy the link address and there you have it. So let's go back to this example. From there we can then update the CSS. Here we have a few of the values. They have fallbacks and defaults, but you can update them to make sure they look like you want them to, and you can really start to play around here. Now, another thing that you can do is animate this stuff. So here is just an example of a keyframe animation in CSS.

26:20 - It's just going to change the number of confetti from 60 to 65, and this will let us re-render the worklet and create this sort of animated effect. So here, let me just quickly create an animation. We're going to call confetti, let's say for 0.5 seconds, infinite and linear. And once I do that, this will start to animate and we'll see this sort of playful animation of confetti just kind of playing around on the page.

26:46 - If I wanted to change this worklet, again I could just go and copy the CDN link. If I wanted to change it to sparkles, I could just copy that link address, bring it back here, and then create this sparkle effect instead of the confetti effect. Now, when it comes to working in more production environments, you'll likely want to manage these worklets yourself, and in that case you can install them via npm. So if I check out this Worklets page, let's see which worklets we want to use.

27:14 - This static gradient worklet, I think, is pretty cool, especially, as I mentioned, at these wider sizes. So I'm just going to copy the name of the npm worklet here, which is the name that we have listed, and go to VS Code, where I have a create-react-app baseline just loaded up to play with. And now I'm going to npm install this worklet as well as the CSS Paint Polyfill, and I want to make sure that I am saving them to my local dependencies. So now npm is going to download them, register them, and add them to the package.json.

27:48 - And from there, we'll be able to reference the static gradient worklet and the CSS Paint Polyfill. We're also going to be using a file loader for this; this way we can keep the worklet file separate and not mixed in with the rest of the JavaScript. So if I check the package.json, we can now see that the CSS Paint Polyfill and the static gradient worklet have been added. I can go to my App.js and reference them. So let's first import the CSS Paint Polyfill; it autocompletes because the editor is reading it. And then let's import the worklet URL as a variable.

28:20 - So let's do import workletURL from, and here I use the file loader and reference the static gradient worklet. Now we'll be able to use that worklet URL when we register the worklet. So I'll do CSS.paintWorklet.addModule and pass workletURL. We are now registering the worklet in our application, which is great, but we're not going to see anything until we first start the app, and secondly, we go into the CSS and add it there. So we have the app started; it's loading here for the first time.

28:59 - It takes a second, and there is the React app, up on the page. So let's go into the CSS and change this app header from just this blue background color, which we see here, to having a cool static gradient background. So let's adjust this file: we'll add paint(static-gradient) on top of the color. This is no longer a background-color, it's a background, and let's also change the static gradient size; this is a custom property, static gradient size. I liked it at 10, I thought it looked cool. So I'm going to hit Save now.

29:43 - I'm going to open it here, and there we see the static gradient background now being added to our UI, which is neat, but it's also really loud. So we can open up DevTools and start playing around with it. There's another variable called static gradient color; let's just set it to red really quickly to debug it. So now we have this box that pops up, and here you can use all of these neat DevTools features to find the right color and effect. I think something a little bit more subtle, like this, could be nice. I'm just going to copy this hex code, open up my VS Code again, and add that variable. So, static gradient color, let's update this and just paste the hex. So now when I open this, we see the static gradient background. I'll just refresh the page to show you that this renders with the custom properties that we updated.
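Pulling the steps above together, the App.js wiring might look roughly like this (a sketch; the package name and file-loader usage are assumptions based on the description, not the exact demo code):

```js
// App.js — register a Houdini paint worklet in a create-react-app project.
import 'css-paint-polyfill'; // polyfill for browsers without native Paint API support

// Use webpack's file-loader so the worklet ships as its own file
// and we get its URL back (the package name here is an assumption).
import workletURL from 'file-loader!houdini-static-gradient';

// Hand the worklet file to the browser's (or polyfill's) paint worklet registry.
CSS.paintWorklet.addModule(workletURL);
```

The CSS side then swaps background-color for a background that layers paint(static-gradient) on top of the base color, as described above.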

30:41 - I also wanted to quickly show you that this will work in Firefox, which I have pulled up here, and in Safari. So I'm just going to open this up in Safari as well and drag it into view. And here you can see that we have this CSS Houdini paint worklet effect working across browsers, with the CSS Paint Polyfill doing a lot of the heavy lifting. This is also a community-driven effort, so we're looking to add your contributions to our worklet library too.

31:11 - If you have a Houdini worklet that you've built, or you want to play around with Houdini, this is a great opportunity to do that. Please feel free to submit a pull request and we'll be sure to get it on our page. The contributing instructions are in the GitHub repo and also linked below. And worklets aren't the only thing you can submit; you can also submit a resource. On the Resources page, you'll find a variety of different resources like articles, editor drafts, web docs, and tutorials.

31:40 - There are some really great examples and demo sites. So if you find a great article or write one yourself, feel free to submit it; we'd love to have those as well. Thanks for watching this video on CSS Houdini and the CSS Paint API. It truly does open up a world of possibilities, and hopefully houdini.how can help you discover some of those possibilities.

32:03 - If you've been inspired by something you saw in this talk, please let us know and contribute your own ideas to houdini.how, whether they're worklets that you've built or great resources that you've come across; I would love to see them. Thanks again, I'm Una, tuning out. (upbeat music) - Hello, my name is Ingvar Stepanyan, and I am a WebAssembly developer advocate at Google. Today we are going to talk about WebAssembly debugging and look at some of the improvements that Emscripten and Chrome DevTools have made in this area for C++ applications. First of all, let's take a look at the basic debugging experience.

32:45 - For this example, let me create a C library for calculating Fibonacci numbers. So I'm defining an exported function, fib. It accepts a single parameter, the index of the Fibonacci number we want to get. And the body itself is a fairly straightforward implementation of the algorithm, where each next number is the sum of the previous two. On the HTML side, I'm importing the JavaScript generated by Emscripten, initializing it, and once it's done, calling the exported fib function with some sample index, let's say 10. Now we need to compile this module. So I'm invoking emcc with fibonacci.c, the filename, and then passing some parameters to generate an ES module.
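As a rough sketch of the loader page just described (the file names, flags, and module shape are assumptions, since the exact Emscripten invocation isn't reproduced in the transcript):

```js
// index.mjs — load the Emscripten-generated ES module and call fib().
// Built with something along the lines of:
//   emcc fibonacci.c -o fibonacci.mjs  (plus the ES-module and
//   optimization flags mentioned in the talk)
import initModule from './fibonacci.mjs';

const Module = await initModule();  // instantiate the Wasm module
console.log(Module._fib(10));       // exported C functions get a leading underscore
```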

33:22 - And I'm building it with optimizations enabled. Now when I go to Chrome, I can see the result in the console as I would expect, and I can go to the Sources panel to inspect it. When I do that, on the left I can see the generated JavaScript and WebAssembly files, and I can set a breakpoint at my entry point. When I do that, it will stop there, and from there I can step into the Emscripten-generated JavaScript.

33:47 - When I step in again, I can jump right from JavaScript into the generated WebAssembly module. Now, what I see here is a raw disassembly view of the WebAssembly module. It looks quite scary, but luckily it's not something most developers will need to deal with, as it's the most basic debugging experience you can get without any help from the compiler. Still, you can see that DevTools has helpfully generated a function name for us that shows up in the call stack and in the disassembly. It based it on the export name.

34:15 - So even when you don't have any debugging information attached, you still at least get readable stack traces. We can also see the parameter, [inaudible], and we can even scroll to the Scope view and see the value of 10 we passed from the JavaScript side. In this view, I can also step over individual instructions. I can enter our main loop. I can set a breakpoint and, say, run the sample a couple of times. Once I do that, I can see how var0, our counter, gets decremented, while var1 and var2 look like consecutive Fibonacci numbers.

34:48 - And then I also have var3, which, to be honest, I just have no idea what it is, as it's not something from our original code. So while this works and provides at least some debugging experience, it's not a great one, as it takes quite a bit of guesswork to understand what all these instructions mean, what all these variables are, and how they map to our original code, even on such a tiny example. On larger apps it becomes an even bigger problem. In fact, let me demonstrate a slightly more complicated example. In this one I'm going to compile a C++ application for drawing a Mandelbrot fractal.

35:23 - You can see that this application is still fairly small; it's a single file containing 50 lines of code, but this time I'm also using some external libraries, like SDL for graphics, as well as complex numbers from the C++ standard library. So I am initializing SDL, and after that I am generating a palette with some random colors; I don't care what they are for the purpose of the demo. And then I'm doing a bunch of calculations on complex numbers, as per the Mandelbrot formula, to determine the color of each pixel on the screen.

35:54 - Finally, once it's all done, I'm drawing everything to the canvas. Now we're going to compile it very similarly to the Fibonacci example, except I'm allowing it to use as much memory as it needs and passing some parameters to link it with the SDL library and generate a default HTML page instead of a custom one. Once it's compiled, I can open the result in Chrome and see our beautiful fractal shape with some random colors. Once again, I can open DevTools and see the generated JavaScript and WebAssembly in the Sources panel.

36:26 - Except this time, when I open the WebAssembly file, it looks a lot, lot larger and quite incomprehensible. I can still search for the main function, I can recognize some imports, and I can set a breakpoint inside the main function; when I reload the page, it will get hit. And from there, once again, I can step into the main code, but at this point I just have no idea what all those instructions are and what all those variables mean, as it has zero resemblance to my original code. What we would want instead is to see the original C++ code we wrote and are actually familiar with. And it turns out we can do just that. First of all, we need to go to the link provided on the screen and install this Chrome extension.

37:12 - It's been developed by the Chrome DevTools team to support debug information produced by Emscripten. Once it is installed, let's recompile the module, disabling optimizations and enabling debugging with the [inaudible] flag instead. I'm also adding one extra flag to specify where DevTools should be looking for the sources, in this case, one directory up from the folder we compile to. Now let me compile it. This time, when I open DevTools, in addition to the usual JavaScript and WebAssembly, I can see that it locates and shows my original C++ code as well. I can not only see it, but I can also set a breakpoint inside of it.

37:50 - And when I reload the page, it will stop right there, instead of in the raw disassembly. Moreover, if I look in the Scope view, then instead of some auto-generated names, I can now see the original C++ variables, with their corresponding types. When stepping, I'm also no longer stepping through individual instructions, but rather over familiar source-level C++ expressions. For now, the only variables that are initialized are width and height, but let me step over, for example, the palette generation. Once I do that, I can expand the palette, as well as the nested structures, look at the colors it generated, and verify that they look okay and random enough.

38:31 - From there I can step over, for example, the center initialization, and I can observe the real and imaginary parts of the complex number. I can also step into the loop, and once I'm inside of it, I can see the X and Y variables as well. So let me step a few more times. I have generated a point; I can verify that it's at zero, zero. Let it run a few more times; now X is three, and the point's real part should be non-zero. When I expand it, yep, I can verify that.

39:08 - Similarly, I can step through a few more variables, and let me just skip to the part where it picks a color. At this point, we get it from the palette. I step through that, and now I can expand the color and check which RGB values it picked for this pixel. I believe that this provides a much more natural debugging experience, with source-level breakpoints, stepping, and value inspection, and it's something you can already use today for your apps.

39:36 - One thing that is worth noting is that for larger apps, the debugging information can be much larger than the WebAssembly code and data itself, so it might be desirable to split it out so that it doesn't get downloaded for non-debugging sessions. For our Mandelbrot example, we can check the file size of the generated WebAssembly and see that it's around 670 kilobytes. Now we can recompile it just like above, but with one extra option, -gseparate-dwarf, which allows us to specify a file name that Emscripten should split the debug information out into.

40:12 - When we do that and check the binary size again, we can see that it goes down by 25, almost 30, kilobytes. This doesn't look like much, because we only have a single 50-line source file, but as your application grows, the situation quickly flips, and the debugging information can actually take up most of the space in the binary. In fact, Google Earth is successfully using the new debugging experience, combined with this option, to debug WebAssembly builds of their large C++ application on the web. You can see how they can set breakpoints in the source of a click handler, trigger them from the web page, step over the C++ expressions, and inspect some of the variables, just like we did in our trivial demo.

40:55 - All in all, these are some amazing improvements that will unlock even more applications and allow them to bring their experiences to even more users across a shared, cross-platform web. Still, it's not the end of the journey; there are more features being worked on here. Just to name a few, we'll be adding a memory inspector for raw memory views and custom formatters for C++ types, and we're working on improved profiling and code coverage support, and much more. Thank you for listening, and please stay tuned for future updates. - Welcome to the Code Review. Glad to be here. There have also been some things really pushing the boundaries recently, so let's get started talking about that.

41:45 - - And just before we continue, I'm just going to move this, because it turns out I can't position cameras very well for our presentations. - So let's dive right in. - We also have Paul Lewis joining us. (upbeat music) - We will visit the browser lands where, deep within Interface Valley, a new process has been created. (upbeat music) - So yeah, thank you very much and goodbye. - Bye. (giggling) - What's wrong? - I'm not sure how we turn off. (chuckles) (mumbling) - So stop streaming. - Okay, goodbye for real. - Goodbye. (beep) - It is time for Artechulate.

42:45 - Gentlemen, are you ready? (sighing) - Already sweating. - I’ll take that as a yes. - I’m already in game mode. I’m just concentrating. - All right, get the game faces on. On your marks, get set, explain. - Okay. Oh, oh, server, this is, if you did brain surgery on something to… and then the second word would be a kind of a bird. A kind of bird at night, it flies at night and it’s known for rotating its head all the way round. - An owl. - Right, that’s the second word, the first one would be brain surgery to kind of… - Oh, the lobotomized owl. - Yes. Okay. The standards body? - W3C. - Correct. Okay.

43:32 - So really old programming language, from like 1950s, 1960s. It’s the kind of thing you think of mainframes. - COBOL. - Correct. This one would be an old way of connecting a monitor to a PC, for example. - Oh, VGA? - Slightly newer. - DVI. - Right, now expand it to what you’d actually call it. It is a DVI what? - Plug, socket, cable. - Keep going. Yes, there you go, DVI cable. - DVI cable. - Surma.

44:00 - Right, an old way of archiving that you’d have done on Windows. - A WinRAR. - Yes. Brilliant, love it. Okay, yeah, right. The standards that you write all your mark up in. Yeah? But then, add a letter on the front, that’s, again, a little bit on the older end. - The standard that I write all my mark up in? - So what do you write your mark up in? What’d you write your mark up in? - VS code. - No, as in, - Text. - What’s the, yeah, but what’s it actually known as, the standard for writing? You’ve got JavaScript, CSS and? JavaScript, CSS and what? - JavaScript, CSS and HTML.

- Right, put another letter on the front. - XHTML. - Another one, keep going. It makes it exciting and moving and… - Oh, DHTML, DHTML. - Perfect, perfect. Right. Now, you know when you do a thing and thing and thing but that's really annoying, so there's a new operator that lets you just check. - Oh, no. (chuckle) - It's a question mark. - It's the nullish coalescing operator. - Yes. Brilliant. Okay. Apple have a connector of this type with… - Lightning. - Keep going, another one. - Oh, Thunderbolt. Perfect. Okay. Going back to the standards body, there are different levels of readiness for specs… - Editor's draft. - Perfect, that's it, that's it. - Oh, excellent. - Yeah. All right, the one that's up from UNO is? - Duo. - Brilliant and, yes. - Well done. Well done. - Oh.

45:44 - - You did very, very, very well with those. - There were some lucky stabs in the dark there, like with COBOL and whatever. I was expecting to have to list like a ton of old programming languages, but yeah. - I think DHTML was surprisingly difficult to land. - Yeah, I was thinking about SHTML and all of that, so I don't, yeah. - But you crushed it with… - I did enjoy your dancing around with the lobotomized owl. That was very enjoyable to watch. - I've never heard of it. What is that? Do I need to go and Google that? I'm not sure. - It's a CSS selector, * + *. It's called the lobotomized owl. - Today, I learned. (beep) (upbeat music) - Hi folks, we're here today to talk about transitioning to modern JavaScript, in order to get better value and performance out of every byte. I'm Houssein. - And I'm Jason. Houssein, are you ready? - Ready for what? - Ready to play Is It Modern, the show where I quiz contestants on whether a piece of JavaScript is modern or not modern. Houssein, you're our lucky contestant today, let's begin.

47:00 - For question one, we have these two lines of code. What do you think, Houssein, modern or not modern? - I see months is declared using var, and I'm pretty sure indexOf was introduced in ES5, so this is not modern. - Correct, there's no inherently new syntax being used here, so it's not modern. On to question two. Another two lines of code, is this modern? - Along the same lines, Object.assign isn't really new syntax or anything.

47:27 - So again, I'm going to go with not modern. - You're right, not modern. All right, last question. This piece of code is a little larger and, I promise, it's not a trick question. What do you think, modern or not modern? - Taking a look here, I don't think there's anything modern regarding the promise syntax, but I do see a variable being declared with const, and const and let weren't introduced until much later, so this has to be modern code. - [Man] Oh, tricky. Block-scoped let and const are actually supported in Internet Explorer 11.

47:57 - There are a few bugs to keep in mind, but we can run this code. - Wow, okay, I guess I learned something new today. Are you really going to be doing this the whole time, Jason? (laughing) - No. - But that kind of begs the question: what exactly do we mean when we say modern code? Well, for starters, modern JavaScript is not ES2015 syntax, or ES2017, or even ES2020. It's code written in syntax that is supported in all modern browsers.

48:28 - Right now, Chrome, Edge, Firefox, and Safari make up 90% of the browser market, and another 5% of usage comes from browsers based on the same rendering engines, which support roughly the same features. That means 95% of visitors to your site are using a fast, modern browser. The browsers that make up the majority of market share are also evergreen, which means that they get new JavaScript features over time. But how do we write or generate code for a moving target? The easiest way is to look to the features that are already widely supported.

49:06 - First up, classes, which have over 95% browser support. Arrow functions, 96%. Generators also have 96% browser support. - [Man] Probably the most underused JavaScript feature; in particular, this six-line generator implements a lazy binary search over the DOM. - Yeah, to be honest, I can't think of ever having to write a generator by hand. - This might have been my first. - [Man 2] Block-scoped const and let declarations save us from hoisting issues and have 95% browser support.

49:38 - And like we mentioned earlier, this is actually partially supported in some older browsers, so if we're careful, we can almost call this 97% support. Destructuring has 94% support; rest parameters and spread also have 94% support. Object shorthand, which makes it easy to forget that this wasn't in the language before ES2015, now has 95% browser support. And finally, async/await, which, even though it was an ES2017 feature, has 95% browser support. - This is easily my favorite feature of the language. - Oh really, not generators? We're using the term browser support quite often, and it's worth clarifying what that means. You can think of browser support as the percentage of global web traffic from browsers that support a given feature. To get a full picture of modern JavaScript browser support, we can take the lowest common denominator of the features we just saw, and we can see that all of these features are supported in 94% of browser traffic. Now keep in mind, this is even higher for newer sites and apps.

50:47 - As an example, the total here would be 97% for visitors only from the US. - Yeah, so while it's actually pretty useful to have a rough idea of the browser support for the language features you're going to use, most of us aren't writing code that gets delivered totally unmodified to run in browsers; we rely pretty heavily on transpilation. Say I wanted to have a function that returns a promise resolving to the number 42. I might write a little async arrow function like this one. In order to have that code run in the last 5% of browsers that don't support async arrow functions, I might transpile it to something like this, or at least I might try.
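The snippet is only shown on screen in the talk; as a rough stand-in, it might look something like this (illustrative, not the exact code from the slides):

```js
// A few bytes of modern source: an async arrow function
// whose returned promise resolves to 42.
const answer = async () => 42;
```

Transpiling something like this for ES5 typically means rewriting it as a generator driven by a regenerator-style runtime helper, which is where the size blow-up described next comes from.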

51:24 - In reality, most current tools are going to take my 21 bytes of source code and transpile it to something like 583 bytes of source code, plus a runtime library that the generated code depends on, which actually brings us up another six and a half kilobytes, to 7,000 bytes. Obviously, the transpiled code will take longer to load than the original version, it's larger, but the dramatically larger compiled code also runs significantly slower once downloaded. JavaScript code gets compiled to instructions that are executed by a virtual machine, and we can count those to estimate how much work is required to run a given program.

51:58 - So our original async function compiles to 62 instructions, whereas the transpiled output compiles to over 1,100. We can also benchmark these side by side, and the transpiled version executes more than six times slower. And this size increase is actually relatively consistent across modern features. Pulling all of those earlier syntax examples into a single module comes to about 780 bytes when minified using Terser to remove whitespace.
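A hedged sketch of the kind of module being described, touching each of the features listed above (illustrative only; the actual module from the talk isn't reproduced in the transcript):

```js
// Classes, arrow functions, generators, block scoping, destructuring,
// rest/spread, object shorthand, and async/await in one small module.
export class Fetcher {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }

  // Generator: lazily yield endpoint URLs for the given names.
  *endpoints(...names) {
    for (const name of names) yield `${this.baseUrl}/${name}`;
  }

  // Async/await, destructuring, and object shorthand.
  async load(name) {
    const url = `${this.baseUrl}/${name}`;
    const { status, ok } = await fetch(url);
    return { url, status, ok };
  }
}

// Arrow function with array destructuring and rest.
export const first = ([head, ...rest]) => ({ head, rest });
```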

52:24 - If we then transpile that, the generated code is six kilobytes, that's seven times more, and the only benefit we got is that it can run in Internet Explorer 11, if we add another 10 kilobytes of polyfills that it depends on. Theoretically, we also get support for Opera Mini's extreme rendering mode, although in reality most transpiled apps still won't work in that mode because of other limitations. Now, we know that the code on the left here works in 95% of browsers, and a lot of us are already writing modern code like this, so it's tempting to say, well, that's fine, I'll just ship that modern code that I wrote to browsers. - Well, you can think that you can change all the code in your application, but if we actually take a step back and look at what makes up our website code, we'll find that the majority of our code base comes from third-party dependencies. Data from the HTTP Archive shows that half of all websites ship almost 90% more third-party code than first-party code. - Right, so I did some really rough napkin calculations, which is always dangerous, but working back from global web traffic, we can estimate that the overhead of shipping legacy JavaScript accounts for around 80 petabytes per month of internet bandwidth.

53:38 - That extra bandwidth, shipping unnecessarily polyfilled and transpiled code, puts something like 54,000 metric tons of carbon dioxide into the atmosphere. We'd have to plant 30 million trees to offset that much carbon dioxide. These are obviously super approximate numbers, but they kind of help paint a picture of the scale of the problem. - Yeah, and a big part of that scale comes from how prevalent this issue is on npm. If we take a look at the top thousand front-end modules on the npm registry, the median syntax version is ES5. The average is also ES5.

54:13 - In fact, less than 25% contain any syntax newer than ES5. Only 11% of modules use the browser field, and 90% of those point to ES5. 2% of modules have a jsnext:main field, and all but one are ES5. And only 9% of modules use the module field. - So why is this? A big part of the reason is that package authors can't rely on application bundlers to transpile dependencies to ensure browser support.

54:44 - We estimate that only half of build tools transpile dependencies at all, which means that modern code published to npm gets bundled as-is by the remaining tools, and that unexpectedly breaks browser support for those users. The thing is, package authors came by this honestly. As modern JavaScript got popular, packages were still published in ES5, because ES5 could be hand-tuned: where general-purpose transpilers have to be spec-compliant so they don't break valid code, package authors could transpile to more efficient output by making assumptions specific to their source. Maybe I'm using classes, but I only use the bits that transpile to simple functions and prototypal inheritance.

55:22 - As we found ourselves using more and more modern JavaScript syntax over time, those possibilities for lossy transpilation faded away. Thankfully, this is now a solvable problem. Historically, npm packages declared a main field pointing to some CommonJS code, which, as we know, is generally assumed to be ES5. Recently, Node and a number of bundlers have standardized a new field called exports. It's great, it does a lot of things, but it has one very important attribute, which is that it's ignored by older versions of Node.

55:56 - This means that modules referenced by the exports field imply a Node version of at least 12.8, and Node 12.8 supports ES2019. That means we have to assume modules referenced by the exports field are modern JavaScript. Going forward, there are at least two types of npm packages I expect to see. We have modern-only, where there's just an exports field, and that implies an ES2019 package, and then packages with both exports and main fields, where main provides an ES5 and CommonJS fallback for older environments. What does this all mean? The bottom line is that soon, if you don't transpile package exports, there's a high likelihood that you'll ship modern code by accident.
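A rough sketch of the second package shape, with exports for modern consumers and main as the legacy fallback (the field values here are illustrative assumptions):

```json
{
  "name": "some-package",
  "exports": "./dist/index.modern.js",
  "main": "./dist/index.cjs.js"
}
```

Dropping the main field leaves the modern-only shape, where the exports entry on its own implies an ES2019 package.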

56:44 - And maybe shipping modern code is okay. The key is that you define a version of modern that strikes the right balance between JS features and browser support. Our research has shown that ES2017 is the sweet spot here, since it has 95% browser support, offers huge improvements over ES5, and is still really close to what we all think of as modern syntax. - Yeah, we're not saying that you should only write ES2017, it's much the opposite. ES2017 is a great transpile target. Transpiling the most recent ES2020 syntax features to ES2017 is generally extremely cost-effective. Transpiled outputs like this for-await loop are the kind of thing we're looking for.

57:25 - The overhead incurred here is only four bytes. - But maybe you do need to support Internet Explorer 11, or Opera Mini's extreme mode. Thankfully, there's a really solid way to support older browsers without impacting newer ones. First, generate ES2017 bundles for your application and serve those to modern browsers using script type module. Then generate polyfilled ES5 bundles and serve those to legacy browsers using script nomodule.
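The serving pattern just described boils down to two script tags (a sketch; the bundle file names are assumptions):

```html
<!-- Modern ES2017 bundle: only browsers that understand ES modules load this. -->
<script type="module" src="/js/app.modern.js"></script>

<!-- Legacy, polyfilled ES5 bundle: module-supporting browsers ignore nomodule. -->
<script nomodule src="/js/app.legacy.js"></script>
```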

57:56 - There's no expensive server setup or user-agent sniffing required, and it lines up really nicely with our ES2017 sweet spot. There are two ways to generate these two sets of JavaScript files. The first is to compile your application twice, and the second is to compile your application for modern browsers and then transpile the output. With the first technique, we build the application as if we were only going to support modern browsers: each source module gets transpiled to ES2017 by something like Babel or TypeScript, then the code is bundled. Then we run a second full build of the application, but this time with the transpiler configured to convert modules to ES5 and add polyfills. The result is independent sets of JavaScript bundles that we can load in modern and legacy browsers. The second approach flips things around a bit: we run a single build to generate ES2017 bundles for modern browsers, then we transpile those bundles down to ES5, with polyfills.

58:58 - One great advantage of this approach is that all code, including any dependencies, can be ES2017. Since the whole build assumes modern code, all code can be modern; it'll all be transpiled when generating the legacy versions of the bundles. Now, this is all really important, because IE 11 is a tax we and our users don't have to pay. Consider the cost as you think about your site's performance.

59:27 - If you need to support IE 11, be careful not to degrade the experience for the majority of visitors in order to get there. We hope we've made the case for why modern JavaScript is so important. We also want to make this as easy as possible to adopt, so we've published a how-to article that provides configurations for popular build tools to get you started. - Yeah, and depending on your setup, this can be as easy as installing a plugin.

59:54 - If you're curious what kind of impact turning on modern JavaScript would have on your website, today we're launching a tool for that. Load the tool, enter the URL of a page, and it will analyze the JavaScript of that page in order to calculate how much smaller and faster that page would be if it leveraged modern code. Go check it out and let us know what you think. We're also working on something similar for npm packages, and any feedback on this version helps make that one better too. - Thank you, and we hope you enjoyed listening as much as we enjoyed giving this presentation. - Thanks for watching. (upbeat music) - Hi everyone. I'm PJ.

00:42 - I'm a product manager on the Chrome Web Platform team. This session is on next-level web applications. And while the capabilities I'm going to describe are most applicable to desktop screens, let me emphasize that powerful web apps work everywhere, on phones and on TVs too. I love the phrase "one web, many screens." You can build powerful applications for any device using web technologies, and with a good responsive layout, you can ship the same code base across all of them.

01:11 - Compare that to the alternative, a minimum of three, if not five or more, code bases, and it's clear why we see more and more developers choosing the web for their applications. To understand how transformational this moment is for the web, let's start with a little internet history. Gmail is a market-leading email client, but when it launched in 2004, there were many incumbent products. There were already dozens of popular clients, and virtually all internet users had an email account and a client. If you've worked at a startup, you know that a new product typically has to be at least 10 times better than the incumbents to win market-leader status.

01:48 - So what was Gmail's secret sauce? Well, what made Gmail different was being a web-based client, which meant reach and universal access. There was no software to install. All users needed was a web browser, and by 2004, most computers had one. This made user acquisition trivial: all anyone had to do was have a link. There was one more requirement, though, and it was a critical web capability that started landing in browsers just a few years before Gmail launched. For those of you who have been developing web software for a while, you might remember this new capability called Ajax. Ajax transformed the web as an applications platform.

02:29 - Before Ajax, server requests could only take place with a page refresh, so client interactivity without reloading the page was basically impossible. Imagine you had to hit refresh every time you wanted to check for a new email; Gmail would never have succeeded. Ajax was transformational, and in addition to enabling Gmail, within a couple of years we started to see applications like Writely that formed the foundations of G Suite. I think we're now on the cusp of another transformational moment. I've broken these APIs up into three rough categories: environment access, application flow control, and engagement.

03:06 - With these APIs, web apps can load and save files from the user's file system, trivially use tabs in standalone windows, and automate the focus of the user flow between standalone windows, tabs, and the browser. They can also place windows across multiple screens and give users the option of running the software on start-up. Finally, with notification triggers, we unlock new types of applications, such as alarm clocks and calendars, that need robust assurances about the timing of when notifications are displayed. A quick word of caution: the timelines given here in this table are rough estimates. You can always get the latest info on the Fugu API Tracker at the URL in the center right of the slide.

03:51 - Before we get into the specifics of the APIs, let’s hear an example of what’s now possible on the web. - Hi, I’m Alex, co-founder and CEO of Clipchamp. Clipchamp is the world’s first and only completely in-browser video editing platform. We’ve taken what’s arguably one of the most complex and complicated application categories and got it to work purely based on web technologies, including a computationally very intense video export process. Google Chrome’s always been our platform of choice.

04:21 - One of the features that we adopted very early was progressive web apps. And we can already see now that users who have installed our PWA retain three times as well as the average user. Some of the new features that are going to be landing in Chrome and progressive web apps, such as Native File System and Font Access, will make the app indistinguishable from any other app that they might have installed on another operating system. We're incredibly excited to see these features in the wild and our users adopting them at scale. - Thanks so much, Alex and Clipchamp team.

04:56 - For all the developers out there watching this, you might want to stand in front of a mirror and say "completely in-browser video editing platform" three times to fully believe it. Let's get into more detail on these APIs. Many of them aren't in stable yet, so you might need Canary or another dev channel version of Chrome, and you might need to activate some flags. I've specified the flags needed where appropriate. Let's say that you want to create an application that opens CSV files. Your manifest would look something like this. We're declaring in this section that when a user double clicks on a file with the text/csv MIME type and a .csv extension (you can see the accept member there), the action is handled by the app's /open-csv route. Now here's an example of receiving a file handle. To access launched files, the site should specify a consumer for the launch queue object attached to the window. Launches are queued until they're handled by this consumer, which is invoked exactly once for each launch.
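A sketch of what that could look like, using the member and method names from the experimental File Handling API (the route, file names, and the renderCsv helper are illustrative):

```json
{
  "name": "CSV Viewer",
  "file_handlers": [
    {
      "action": "/open-csv",
      "accept": { "text/csv": [".csv"] }
    }
  ]
}
```

```js
// Receiving the file handles when the app is launched with a file
if ('launchQueue' in window) {
  launchQueue.setConsumer(async (launchParams) => {
    // The consumer is invoked exactly once for each launch
    for (const fileHandle of launchParams.files) {
      const file = await fileHandle.getFile();
      renderCsv(await file.text()); // hypothetical app function
    }
  });
}
```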

05:57 - With this approach, you can ensure every launch is handled. Your application can then choose to handle these files however you’d like. We’ve got so much to cover today, I better speed things up. So it’s time for a double whammy. These two capabilities make sense to run through together because display override will help us with tabbed application mode. First, I’m gonna show you what tabbed application mode looks like. Hey, look at that. Easy tabs in a standalone window.

06:26 - This is groundbreaking for many productivity apps where it's common to have more than one file open at the same time, but you want them all associated with the same window. Here's how you use the new display_override member in a manifest file. Notice that we start with the widely supported display mode standalone. The display_override member is ignored by browsers that don't have support for it, so the app can continue to fall back to display standalone on unsupported browsers.
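A minimal sketch of such a manifest (the "tabbed" value corresponds to the tabbed application mode experiment, so treat the exact values as illustrative):

```json
{
  "name": "Example productivity app",
  "display": "standalone",
  "display_override": ["tabbed", "minimal-ui"]
}
```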

06:53 - display_override takes a list of display modes, which are tried in order, falling back to the next one each time a mode isn't available. You might be wondering what link capturing and URL handlers are all about. Link capturing and URL handlers let you control what happens to the user journey once your app is installed and in standalone or tabbed browsing mode. For example, should clicks from the browser that are inside the scope of your web application open in a browser tab, or in an existing standalone window if one exists? Maybe they should open a new standalone window, or maybe they should open a tab in a tabbed window. All of these are possible. And unfortunately, we'll only have time to scratch the surface of what this API can do.

07:35 - Here's a sample manifest that has link capturing in it. The capture_links member is specified in this instance as existing-client-event. That is, when a link is clicked leading into this PWA's scope, the user agent finds an existing PWA window (if more than one exists, the user agent chooses arbitrarily) and fires a launch event on that window's top-level context containing the launched URL. In the url_handlers member, you can see how origins are specified for handling, in this case with a wildcard. The permission for the app to handle URLs for these origins is verified by the browser through a web app origin association file hosted at each handled origin.
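For reference, a sketch of what such a manifest might contain (these members were an experimental proposal at the time, so the exact names and values are illustrative):

```json
{
  "name": "Example PWA",
  "capture_links": "existing-client-event",
  "url_handlers": [
    { "origin": "https://*.example.com" }
  ]
}
```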

08:19 - The window placement API enables complex productivity apps that require multiple window layouts across multiple screens, for example presentation, finance, medical, creativity, or gaming apps. There's a lot to this API, far more than we can deep dive into today, but I'm going to show you a couple of highlights. The highlighted code here returns metadata, including how many screens the user has connected and the coordinate system of those screens. This snippet opens a new window on the device's internal or primary screen.
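A rough sketch of those two highlights (the method and property names changed during the origin trial; this uses the shape the API later shipped with, so treat it as illustrative):

```js
// Query metadata about the connected screens and their coordinate systems
const screenDetails = await window.getScreenDetails();
console.log(`${screenDetails.screens.length} screen(s) connected`);

// Open a new window on the device's internal / primary screen
const primary = screenDetails.screens.find((s) => s.isPrimary);
window.open(
  '/dashboard',
  '_blank',
  `left=${primary.availLeft},top=${primary.availTop},width=800,height=600`
);
```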

08:51 - Here's how you'd request full screen on an external screen, like a TV or projector. This is how you know if the user's screen setup has changed, for example if a new monitor was plugged in or one was removed. I love demoing this feature because it's so easy to use. You'll want to be using Canary and enable the appropriate flag. One way to use this is to simply let users opt into the experience with the install prompt. We're still working on the exact user experience for this, so some of the UI may change before it hits stable. The other way to use this API is programmatically. And you can specify the mode as being either windowed or minimized. For example, some types of applications might want to start in a minimized state. Let's take a chat app, for example.

09:38 - Notification triggers allow notifications to be shown when certain conditions are met. The trigger can be time-based, location-based, or otherwise. For today, we're going to focus on time-based triggers. You might be asking yourself, why not just use regular notifications for this? Well, the reason is that the push API is not reliable for triggering notifications that need to go off at a very specific time. So for applications like calendars, alarm clocks and so on, you need something that has higher accuracy and timeliness.
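A minimal sketch of scheduling a time-based trigger (this was the origin-trial API shape, so it may require a flag or trial token, and the reminder details are illustrative):

```js
// Schedule a notification to be shown ten minutes from now,
// even if the device is offline at that moment
const registration = await navigator.serviceWorker.ready;

await registration.showNotification('Meeting in 10 minutes', {
  tag: 'calendar-reminder',
  showTrigger: new TimestampTrigger(Date.now() + 10 * 60 * 1000),
});
```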

10:07 - Just a quick note: this API is currently in origin trial, and we're welcoming developers to try it out and give us feedback. The APIs I showed in this presentation aren't the only ones to get excited about. Check out the Digital Goods API, Local Fonts, and the Badging API if you want to know about a few more exciting capabilities coming to the web platform. Whew, well, that was a whirlwind. Thanks so much for joining, and please keep those questions coming on Twitter. (upbeat music) - Hi everyone, my name is Asami.

10:48 - I'm a software engineer on the Chrome browser team. Today, I am going to talk about the new logic to detect PWA offline support. PWA is a technology that turns a web application into something that looks and feels like a native app, so that we can take advantage of both web and native app features. Because PWAs offer a native app-like experience, websites that support PWA can be installed as if they were native apps. When you visit an offline-capable PWA, the browser asks if you want to install it.

11:24 - If your answer is yes, the icon for the PWA is added to the app launcher on desktop and to the home screen on mobile. Once installed, a PWA can and should work even without an internet connection. Let's see a little bit more about the flow of how to install a PWA. If you want to install a PWA on desktop, first, you visit a PWA-capable website and click the plus-in-circle button in the Omnibox. There, you can see the install pop-up. You can install the PWA by clicking the Install button.

12:05 - Now you can see the icon in chrome://apps in the browser. When you install a PWA on mobile, the icon will appear on the home screen like this. However, when you open an installed PWA without an internet connection, you sometimes see this screen, the dinosaur page telling us there's no internet connection. This is so frustrating and definitely not an app-like user experience, right? This is a problem we are currently improving. We are hoping all PWAs work correctly without an internet connection.

12:47 - We encourage web developers to make their PWAs work offline by handling the offline situation correctly. So we are raising the bar for websites to display the PWA install prompt. In short, if your PWA doesn't work offline, Chrome will stop showing the installation prompts in the near future. That change will affect both mobile and desktop. Before we dive into how to check the offline capability of your site, let me quickly review the four requirements that make a PWA installable.

13:28 - You need a manifest.json with the correct fields filled in, a secure domain, icons, and a service worker that allows the app to work offline. When the site satisfies all of the requirements, install buttons show up. On mobile, you can see an install button in the three-dot menu and an Add to Home Screen button at the bottom of the page when the site satisfies our requirements. On desktop, you can see a plus-in-circle button in the Omnibox and an Install button in the three-dot menu when the site satisfies our requirements. The service worker is the part that is responsible for offline support. Here is how it works.

14:13 - A service worker can install a fetch event handler to intercept HTTP requests from the PWA before they get to the network. Using this feature, the service worker can, and is expected to, handle all HTTP requests by itself. When you are offline, the fetch event handler is expected to return resources on behalf of the actual website using cached resources, as if an internet connection were available. This behavior has to be implemented by developers. If it's not implemented, the install prompts shouldn't appear, but the problem is that the logic to detect whether a site works offline is not very accurate.

15:01 - Consequently, install prompts may appear even if the site doesn't actually support offline correctly. This is the reason why you sometimes see the dinosaur page in an installed PWA. The current logic to detect PWA offline support just checks for the existence of a fetch event handler in the service worker. I mean, even an empty event handler like this is considered offline support. But what happens if a fetch event handler is empty? If a fetch event handler is empty, it does nothing, so HTTP requests fail, and that leads to the dinosaur page. This is the problem that I'm working on right now.
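For reference, the kind of empty handler being described is just this:

```js
// sw.js: an empty fetch handler currently counts as "offline support",
// even though it does nothing and requests simply fail when offline
self.addEventListener('fetch', (event) => {});
```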

15:46 - Concretely speaking, Chrome will stop considering the empty event handler as offline support. We are updating the offline capability detection logic as follows. In the new logic, Chrome will create a fake HTTP request and send it to the PWA's main page, which is specified as the start URL in manifest.json, to see if the PWA responds with a 200 response. During the test, the PWA runs in a sandbox that doesn't have an internet connection.

16:24 - A PWA is considered offline capable if and only if it returns a 200 response for the fake request. This is the simplest and most naive example of a service worker that can be considered offline capable: the fetch event handler returns a Response object when the fetch request fails, so users see the "Hello Offline page" text when they access the installed PWA offline.
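A minimal sketch of that handler (the page text is illustrative):

```js
// sw.js: fall back to a hand-made Response whenever the network fails
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(() => {
      return new Response('Hello Offline page', {
        headers: { 'Content-Type': 'text/plain' },
      });
    })
  );
});
```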

16:58 - Here is another example of a service worker that can be considered offline capable. This code uses the Cache Storage API to store resources when users install the service worker online. Once users install the service worker and the resources are stored in cache storage, the fetch event handler can return those resources without an internet connection. This is yet another example, this time with Workbox. Workbox is a JavaScript library that makes it easy to support offline pages. First, you supply an array of resources to the precacheAndRoute method to cache them as soon as the service worker is activated. Second, you can use a navigation route to return a specific response for all navigation requests.
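A sketch of that Workbox approach (the file names and revisions are illustrative):

```js
// sw.js built with Workbox v6
import { precacheAndRoute, createHandlerBoundToURL } from 'workbox-precaching';
import { registerRoute, NavigationRoute } from 'workbox-routing';

// Cache the app shell as soon as the service worker is activated
precacheAndRoute([
  { url: '/index.html', revision: '1' },
  { url: '/styles.css', revision: '1' },
]);

// Serve the cached app shell for all navigation requests
registerRoute(new NavigationRoute(createHandlerBoundToURL('/index.html')));
```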

17:41 - Using Workbox is easy and practical, so we recommend using it for your site. Here is the timeline we are currently planning. The new offline detection logic is not available yet, but most of the implementation is already done in the current stable Chrome version, 87. In the next milestone, Chrome 88, the feature is enabled behind a flag. You can enable the check-offline-capability flag in chrome://flags.

18:14 - Also, we are going to run an experiment for some Dev and Canary users to make sure no regressions happen. We are planning to roll out the feature in Chrome 89 in warning mode. Warning mode means that the new check happens without enforcement, and Chrome shows a warning message in the developer console if the site doesn't pass the new offline capability detection logic. After a grace period, the new requirement will be enforced for all sites. We are very excited to ship the new offline detection logic, and we believe that it will lead to a better user experience.

18:58 - We welcome your feedback, so please tell us what you think about the new offline capability check. Thank you. (upbeat music) - Chrome 86 introduced changes to the Trusted Web Activity quality criteria. The Play Billing API will soon be available to PWAs within a Trusted Web Activity. And PWAs can now be uploaded to the Play Store on ChromeOS. I'm André Bandarra. - And I'm Adriana Jara. And let us show you how to prepare your app and take advantage of the new features.

19:43 - - We recently launched Bubble Wrap, a command line tool that helps web developers wrap their PWA in an Android app. We continue to improve the tool, and we're happy to say that external dependencies are now downloaded automatically when running the tool for the first time. No more messing around with downloading files, unzipping, or typing in paths. The app creation flow was also improved, with more helpful descriptions of the input fields and better validation. A frequent request from developers of game and media applications was to be able to start apps in immersive mode and lock them to a specific orientation from the home screen, and Bubble Wrap now supports exactly that.

20:27 - It is possible to choose the start-up modes when initializing the application. Users expect applications installed on their devices to have consistent behavior, regardless of technology. When inside the Trusted Web Activity, the geolocation permission can now be delegated to the operating system. When enabled, users will see the same dialogues as apps built with Kotlin or Java, and find controls to manage the permission in familiar places. One of the advantages of using Trusted Web Activity is that the output binaries are small.

21:00 - Bubble Wrap has been optimized and now generates binaries that are 800K smaller. That’s less than half the size generated by previous versions. Bubble Wrap is also now capable of generating app bundle files, which is a different format to package an app when publishing to the Play Store. Developers should now prefer this format, as it will be required by the Play Store starting in the second half of 2021. As a developer creating an Android app, you will need to update your app on the Play Store from time to time.

21:31 - Reasons might include new Trusted Web Activity features or changes to the Android ecosystem. To update an application, start by ensuring you have the latest version of Bubble Wrap, then run bubblewrap update to apply the latest application template to an existing project. Finally, run bubblewrap build to generate the application binary. The quality criteria changed in Chrome 86, and errors in the web app are now delegated to the Android app. This ensures that developers are either handling those errors properly in the web app or that the application crashes in a way that is consistent with other applications on the device, regardless of technology.

22:14 - It’s implemented for three different situations. First, when validation of the relationship between the web app and the Android app fails. Second, when the app fails to handle the offline scenario and the Chrome dinosaur would be displayed. Finally, when opening pages that cause the server to return a 404 or 500 error. The best way to handle those errors is to prevent them from happening in the first place, and you can use a service worker to do just that.

22:46 - This snippet shows a Workbox plugin that handles those scenarios. Learn more about creating custom strategies for Workbox in the Extending Workbox talk by Jeff Posnick. Finally, for more information on the quality criteria changes, check out the linked blog post. We received feedback from developers worried about devices where Chrome is not installed or is not the user's preferred browser. With Samsung Internet adding support for Trusted Web Activity from version 13 onwards, we are happy to say that most browsers on Android now support Trusted Web Activity.

23:22 - Besides meeting requirements when selling digital goods, Google Play billing enables developers to remove friction from purchase flows. And we are happy to announce that, starting on Chrome 88, developers building with Trusted Web Activity will be able to combine the payment request API and the digital goods API to implement purchase flow via Google Play billing. My colleague Adriana will tell us more about how to use the digital goods API on the web app and configure it on the Play Store. Over to you, Adriana. - Thank you, André. As André mentioned, the digital goods API gives you access to tools in the Play Store that make managing digital goods purchases easier for you and for your users. To use the Play billing payment flow, you’ll need to configure your catalog on the Play Store, as well as connect the Play Store as a payment method from your PWA.

24:17 - To walk you through the process, let’s say I have an app to create the perfect digital banana bread. I have created the Trusted Web Activity and added the listing to the Play Store. Let’s start by configuring our product catalog. In the Play Console on the left-side menu, find the Monetize section, and then go to Products. There, we can create in-app products or subscriptions. For our example, in-app products would be things like bananas or nuts for the bread. Here, you can see a list of existing products and use the Create Product button to add a new one. We’ll need a product ID, a name, a description, as well as a price. It’s important to create meaningful and easy-to-remember product IDs. You’ll need them later, and IDs cannot be changed once created.

25:16 - For the product to be available for purchase, after creating it, you will also need to activate it. We could also offer special features like secret ingredients via a subscription. Subscriptions are also set up in the products section. You’ll need to add product ID, name, description, price, and a billing period. You can also list the subscription benefits, and if you want, you can add a trial period, an introductory price, and a grace period for your subscription.

25:57 - You could also insert and manage products using the Play Developer API. Check out the link on the screen for more information. When the user is ready to purchase the product, you'll need to use the web payments API. When creating the payment request, you'll need to specify the Play Store information in the supportedMethods field and provide the product ID in the data.sku field. Once the transaction is complete, the response from the payment request submission will provide purchase details. That includes a purchase token.
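A sketch of that purchase flow (the product ID is illustrative; the actual price comes from your Play Console catalog):

```js
async function buySecretIngredient() {
  const request = new PaymentRequest(
    [{
      // The Play Store information goes in supportedMethods,
      // and the product ID goes in data.sku
      supportedMethods: 'https://play.google.com/billing',
      data: { sku: 'secret_ingredient' },
    }],
    {
      total: {
        label: 'Secret ingredient',
        // The Play Store uses the price configured in the Play Console
        amount: { currency: 'USD', value: '0' },
      },
    }
  );

  const response = await request.show();
  const { purchaseToken } = response.details; // save this on your back end
  await response.complete('success');
  return purchaseToken;
}
```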

26:40 - To complete the transaction, you must acknowledge the purchase with a snippet like this. The acknowledgement lets the Play Store know you received a successful purchase and have given the user the corresponding entitlement. Note that the purchase token received from the Play Store is not automatically associated with the user. The developer must handle giving entitlement for the purchases and saving the token linked with the user to their back end. This token will be used to verify if the purchase is still valid or if their subscription is still active and give the user the appropriate entitlement. I just described only the basic flow.
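A hedged sketch of that acknowledgement (the Digital Goods API surface was still evolving at the time, so the method and parameter names here are assumptions based on the early version described in this talk):

```js
// Acknowledge the purchase so the Play Store knows the user
// has been given the corresponding entitlement
const service = await window.getDigitalGoodsService('https://play.google.com/billing');
await service.acknowledge(purchaseToken, 'onetime'); // parameter values assumed
```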

27:29 - Look out for a companion blog post with more details about Play billing for web. It will include details about managing your subscriptions, using real time developer notifications, and more information about Google Play Developers API. In other news, from Chrome 85, Trusted Web Activities can also be installed on Chromebooks. For the best experience on ChromeOS, your Trusted Web Activity should not include any native Android code. If you want to restrict your app to be installable only on Chromebooks, you can manually add this flag to your Android manifest.
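As a sketch, that Android manifest entry might look like this (the feature name is an assumption based on the ARC runtime that runs Android apps on ChromeOS):

```xml
<!-- Restrict the app so it is only installable on ChromeOS devices -->
<uses-feature android:name="org.chromium.arc" android:required="true" />
```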

28:09 - And if you use Bubble Wrap, we have an option that will add the configuration for you. One last reminder. Every app that is listed on the Play Store must comply with the Google Play Policy. Check out this video for more information. Capabilities like handling full screen and delegating geolocation, as well as new features in Bubble Wrap, make integrating your PWA with the Play Store better than ever, and Play billing gives users a familiar and easy-to-use checkout flow. We hope you give them a try and send our way your comments or questions. Thank you.

28:57 - (upbeat music begins) (beep) - Ladies and gentlemen, we have another Wikipedia race coming up right now. Are you chaps ready? - Yes. - Yep. - Your target page is standard deviation. - Oh, I hate you. - Standard deviation? - Right, I was hoping that was something about mathematics but there isn’t, so let’s go with JavaScript, math. - [Referee] Ah, I see. - We’re doing exactly the same thing. - [Referee] There’s code samples, very helpful to find other mathematics-related articles. - But it’s not really helping because they’re not linked to the word math or maths. - [Referee] I mean, that would be… - Oh no! Okay, I’m going to try a different language of math.

30:29 - - I see both participants are engaging in wild Control F-ing. - Yeah, this is hard. - [Referee] We have arrived at Java, which is not that big of a- - I went to Java and then I came back because I really am struggling, hang on. Got to go to standard deviation, right? - [Referee] Jake has arrived at Lua, which is also a questionable choice at this point. - Yeah. - [Referee] Reading about the syntax of Lua. Ah, we are slowly abstracting from concrete languages to abstract concepts. Maybe this is the way to go. - Why is… - Right, I’m going off the wall. This is it.

31:10 - I mean, Department of Computer Science, University of Cambridge. This is just… - [Referee] Paul is still comparing languages. - Okay, okay, okay. - Okay, I thought you got it then. - [Referee] Ah, looking for the word math in the article about MathML. - Yeah, but that was fine. I've gotten to mathematics. Standard. Aw, come on. How do you not have standard deviation in mathematics? - [Referee] That is a good question. Maybe something worth editing on the Wikipedia page. Ah, we have arrived. Paul's at statistics. That seems like it's fairly close. The pressure is on for Jake.

31:49 - - Standard deviation! Where’s my mouse cursor? Yes! - Ah, that was such a close call, but it was just fast enough to deny Jake his only chance at winning. - Yes! - I hate you so much. I hate you so much. Oh, I’m so happy I denied you the victory. - Tough, good fight, good fight. (laughter) - I think this one is complete rubbish, so you have to get to the Wikipedia page: landfill. Off you go. - Oh, rubbish? - [Jake] Yes. - Okay. Okay. I’m going to go with… - [Jake] I’m impressed at how quickly he moves around pages. I can see why I lost this terribly. - Okay. - [Jake] What are your strategies? - I’m not telling that.

32:44 - - [Jake] Well, it makes for a very boring video. Let’s all just sit here in silence, then, until one of you finds the page. - I’m trying to go via agriculture, but for some reason, there is no dedicated link. - Landfill, got there! What? Oh, I’m so happy with that. - Tell me your path. - [Jake] How did you do it? - Okay. Let’s go backwards. I’m going to have to rewind this. I’m actually quite pleased with this path that I went through. I went Web 2.0. Went to JavaScript library to JavaScript, to V8 the JavaScript engine, figuring, garbage collection. - No way that worked. - So that led me to the disambiguation because it has a disambiguation click at the top. So I went to the disambiguation page, the top one, garbage collection or waste collection as part of municipal waste management. Brilliant. Great. So then I went to the waste collection, and there is landfill. - That’s incredible.

33:40 - That is a path I would not have thought would ever work. So you deserve that win. - I'm so unbelievably chuffed with that one. (upbeat music) - [Instructor] Hi folks, we are going to take a quick tour of some ways of extending Workbox. By the end, you'll be writing your own strategies and plugins and hopefully sharing them with the world. But first, what's Workbox? If you've never heard of it before, you can follow that link to the docs and then come back to this video.

34:40 - At its core, Workbox is a set of libraries to help with common service worker caching scenarios. And when we've talked about Workbox in the past, the emphasis has been on common scenarios: for most developers, the strategies that Workbox already provides will handle your caching needs. Workbox includes ready-to-use strategies like stale-while-revalidate, where a cached response is used to respond to requests immediately while the cache is also updated so it's fresh the next time around. Other common strategies, like network-first falling back to the cache, are available for use as well. But what if you wanted to go beyond these common scenarios? What if your use case involves more arrows? I've got you covered. Let's go beyond the common caching scenarios and talk about customizing Workbox to help with all your caching needs.

35:46 - First, let's cover writing your own custom caching strategies. Workbox version 6 offers a new Strategy base class that sits in front of lower-level APIs like fetch and Cache Storage. You can extend the Strategy base class and then implement your own logic in the _handle method. Let's go back to that diagram with all the arrows. This represents a strategy that can handle multiple simultaneous requests for the same URL by deduplicating them.

36:21 - A copy of the response is then used to fulfill all the in-flight requests, saving bandwidth that would otherwise be wasted. And here's all the code you need to implement that custom strategy. Just a quick note: this code assumes that all requests for the same URL can be satisfied with the same response, which won't always be the case if cookies or session state information comes into play. I'll link to this code later, but there are a few things I wanted to call attention to. This class extends an existing strategy, network-first, and adds in some additional state, mapping the in-flight requests to their corresponding response promises.

37:07 - It overrides the _handle method, checking to see if there's already an in-flight request for the same URL. If there is, the strategy will wait until there's a response for that earlier request. But if there isn't already an in-flight request for the same URL, our strategy just calls the _handle method on its parent class to get a response, adds to the mapping of in-flight requests, and cleans up after itself once the response is received. And that's all the code you need.
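Putting those pieces together, a sketch of that deduplicating strategy might look like this (the class and property names are illustrative, and it assumes every request for a URL can share one response):

```js
import { NetworkFirst } from 'workbox-strategies';

class DedupeNetworkFirst extends NetworkFirst {
  constructor(options) {
    super(options);
    // Maps in-flight request URLs to their response promises
    this._inflightRequests = new Map();
  }

  _handle(request, handler) {
    const existing = this._inflightRequests.get(request.url);
    if (existing) {
      // Wait for the earlier request and hand back a copy of its response
      return existing.then((response) => response.clone());
    }

    const responsePromise = super._handle(request, handler);
    this._inflightRequests.set(request.url, responsePromise);

    // Clean up once the response is received (or the request fails)
    responsePromise
      .catch(() => {})
      .finally(() => this._inflightRequests.delete(request.url));

    return responsePromise.then((response) => response.clone());
  }
}
```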

37:59 - Here's another example of a custom strategy. This is a twist on stale-while-revalidate where both the network and the cache are checked at the same time, with a race to see which will return a response first. Let's take a quick walk through the code that implements that strategy. Like before, our class extends a base class; in this case, we're extending the generic Strategy class that Workbox provides. This is a good starting point when you need more control over the sequence of network and cache lookups. All of our response generation logic is in the _handle method, which is passed two parameters: the browser's request, and a handler parameter, which is an instance of the StrategyHandler class. Although it's not required, it's strongly recommended that you use the handler parameter to make network requests and interact with the cache, like we're doing here. Those handler methods will automatically pick up the cache name you've configured for the strategy, as well as invoke the plugin lifecycle callbacks that we'll talk about in a bit. A StrategyHandler instance provides four handler methods.

39:04 - There are fetch and cachePut, along with the two others we saw on the previous slide. Writing a Workbox Strategy class is a great way to package up response logic in a reusable and shareable form. You can drop any of these strategies directly into your existing Workbox routing rules, and a properly written strategy will automatically work with all Workbox plugins as well. This applies to the standard plugins that Workbox provides, like the one that handles cache expiration, but it also applies to plugins that you write yourself, because another great way to extend Workbox is to write your own plugins.
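Pulling that walkthrough together, a sketch of the cache-versus-network race might look like this (it relies on the StrategyHandler methods just mentioned; treat the details as illustrative):

```js
import { Strategy } from 'workbox-strategies';

class CacheNetworkRace extends Strategy {
  _handle(request, handler) {
    // Start the network fetch (which also updates the cache)
    // and the cache lookup at the same time
    const fetchAndCachePutDone = handler.fetchAndCachePut(request);
    const cacheMatchDone = handler.cacheMatch(request);

    return new Promise((resolve, reject) => {
      // Whichever source produces a response first wins the race
      fetchAndCachePutDone.then(resolve);
      cacheMatchDone.then((response) => response && resolve(response));

      // Reject only if the network fails and there is no cached response
      fetchAndCachePutDone.catch((error) => {
        cacheMatchDone.then((response) => {
          if (!response) reject(error);
        });
      });
    });
  }
}
```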

39:46 - So what is a Workbox plugin, and why would you write your own? Let's go back to the diagram we looked at before. Adding in a plugin doesn't fundamentally change the flow of this diagram, but it allows you to add extra code that'll be run at critical points in the lifetime of a request, like when a network request fails or when a cached response is about to be returned to the page. Each plugin responds to one or more lifecycle events, which are invoked when a strategy handler interacts with the network or the cache. You can see a list of a few of the existing lifecycle events here. We'll link to a webpage with this information later on as well.

40:35 - Workbox version 6 has a number of additional lifecycle events that plugins can react to, all corresponding to different stages in a strategy's lifecycle. Let's combine a couple of those lifecycle callbacks into a reusable plugin that provides a fallback whenever a strategy would otherwise generate an error response. This class implements two lifecycle callbacks, fetchDidSucceed and handlerDidError. It can be added to any strategy class, and if running that strategy does not result in a 200 OK response, it'll use a backup response from the cache instead. My colleague André talks more about using this plugin within a Trusted Web Activity in his Chrome Dev Summit talk, and you can watch that for more details.
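A sketch of such a fallback plugin (the cache name and fallback URL are placeholders):

```js
const fallbackPlugin = {
  // Treat any non-OK response as an error so handlerDidError runs
  fetchDidSucceed: async ({ response }) => {
    if (response.ok) {
      return response;
    }
    throw new Error(`Got a ${response.status} response`);
  },

  // When the strategy fails, serve a previously cached fallback page
  handlerDidError: async () => {
    const cache = await caches.open('fallbacks');
    return cache.match('/offline.html');
  },
};

// It can be added to any strategy, for example:
// new NetworkFirst({ plugins: [fallbackPlugin] });
```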

41:30 - Now that you know more about custom strategies and plugins, you might be wondering which one to write for a given use case. I think a good rule of thumb is to sketch out a diagram of your desired request and response flow. If your diagram has a novel set of connections, like all these extra arrows, that's a sign that a custom strategy is the best solution. Conversely, if your diagram ends up looking mostly like a standard strategy, but with a few extra pieces of logic injected at key points, then you should probably write a custom plugin. Whichever approach to customizing Workbox you go with, I hope this talk has inspired you to do the following: write your own strategies and plugins, and then release them on npm tagged with workbox-strategy or workbox-plugin, or just share them with the rest of your organization.

42:25 - You can learn more, including a detailed look at all the sample code and events, by visiting web.dev/extending-workbox. Thanks to everyone for tuning in. Now go out there, extend Workbox, and share what you build. (lighthearted music) - This short video is about brand new capabilities your web app can use to communicate with hardware devices over Bluetooth, USB, NFC, Serial, and HID. This is opening up so many opportunities, but if your current thinking is "what?" or "why?", I bet you're going to learn some stuff today. I'm Francois Beaufort, and in this video, I want to show you what you can do with those device APIs, how they work, and what developers have delivered so far.

43:22 - But first, why are we building these APIs in Chrome? We want to make it safe and easy for you to connect to real-world devices: from affordable NFC tags to specialized HID devices, flashing a new OS to your phone, or even programming a microcontroller. Ultimately, by bringing those APIs to the web platform, more users will be able to interact safely with existing and upcoming devices. I believe we should not have to install sketchy binaries to control hardware devices. And here's how Chrome protects us: First, a web app can't access a device without user permission. The user has to specifically select the device, then grant the web app access to it.

44:08 - Unlike native apps, web apps are not allowed to list all connected devices. Cross-origin iframes can’t, by default, prompt users. And of course, HTTPS is required for integrity reasons. In 2015, we added the Web Bluetooth API, so that web apps could connect to nearby Bluetooth Low Energy devices and interact with them through the GATT protocol. Think of heart rate monitors, toys, sensors, bulbs, and many more.

44:38 - In short, you can read and write characteristics on the device and get notified of changes. This code, for instance, is all you need to read the battery level of a user-selected Bluetooth device. Gordon Williams, the lead developer of Espruino, an open-source JavaScript interpreter for microcontrollers, took advantage of Web Bluetooth to build the web IDE. - We want to allow users to get started really quickly. They can get one of our devices, go to a website, and start writing code in seconds, without having to install anything.

45:10 - And that’s only possible because of Web Bluetooth and web Serial. Going to a website is faster and safer for users than installing an app. But even if you wanted to make an app, there’s no universal API that works on Windows, Mac, Linux, ChromeOS, Android, and iOS, that we can use for Bluetooth and Serial. With Chrome’s device APIs, we can have one JavaScript code base that will be used by all platforms. And the API is used by so many people that they’re well-tested, reliable, and stable.

45:39 - We don’t have to maintain builds for each different platform we support either. And software updates can be done by changing the website, rather than having to manage app store submissions for every platform. - You can learn more about Web Bluetooth by checking out links in the description. Web USB is one secure way of accessing user-selected USB devices from a web app, without having to install drivers via privileged executables. Think of easier firmware updates, mobile device screen recorders, ledgers, oscilloscopes, and so on.

46:13 - My personal favorite is the official Android Flash Tool that allows you to flash a new OS to your Android device entirely from the browser. You can learn more about Web USB by checking out links in the description. More recently, web apps on Android devices can now experiment with reading and writing to NFC tags when they are in close proximity to the user’s device. I’m thinking museums and art galleries could show some additional information about a display when the user touches their device to an NFC tag. Or an inventory management web app could read or write data to an NFC tag on a container to update information on its content.

46:55 - Web NFC is currently limited to a binary message format called NDEF, which works great across different tag formats. First, prompt the user to allow your web app to scan NFC tags, then handle incoming NDEF messages by subscribing to reading events. I built a Web NFC card game demo last year. It was a lot of fun, and the code was really simple; I was only using serial numbers to identify NFC cards. Writing to NFC tags is similar to reading. This time, call the write method with a string to write some text, or pass a dictionary with an NDEF message to write a URL, for instance. You can find everything about Web NFC at web.dev/nfc. Oh, and I want to say thank you to the folks at Intel for their work. They are doing an amazing job. The Serial API is one of those device APIs that reminds you that the world we're living in today still relies on legacy devices that are critical to our lives. And Henrik Joreteg will tell us why.

48:01 - - I want you to pretend for a second, that you are an anesthesiologist, which means you put people to sleep, and it’s your job to make sure that they stay alive during this process. And you do so by hooking up a patient to something like this, a patient vitals monitor. Now, in addition to actually putting the patient to sleep, it’s your job to track everything, so every five minutes, you have to take a reading off of this device and put it into something called a sedation record. And a lot of people these days are doing that with pen and paper. So naturally, we built a progressive web app at anesthesiacharting.com, that does this for you.

48:37 - It not only helps you produce the record, but we can actually use Web Serial to pull the vitals data directly off of these monitors and put it into the record. It's really simple. We get instant cross-platform distribution without having to ask these doctors to go install some software. They can just go to a webpage, plug in their device, and they're off and running. It's beautifully simple. It turns out there are thousands of these devices out there that cost thousands of dollars and are completely fine, but we can give them new life just by being able to control them and pull the data off of them. Currently, most of them aren't even hooked up to anything.

49:13 - So this is really awesome, being able to take advantage of the investments that they've already made and really bring them back to life. - In other words, the Serial API bridges the web and the physical world by allowing web apps to communicate with serial devices, such as patient monitors but also microcontrollers, 3D printers, Arduinos, Raspberry Pis, and the list goes on. To listen to the data coming from a serial device, first, prompt the user to select a serial port. Then, wait for the serial connection to open at a specific baud rate. And finally, within a simple while(true) loop, call the read method on the ReadableStream reader object. And that's all.
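A minimal sketch of that flow (the baud rate is illustrative):

```js
// Prompt the user to select a serial port
const port = await navigator.serial.requestPort();

// Wait for the serial connection to open at a specific baud rate
await port.open({ baudRate: 9600 });

// Read incoming data within a simple loop
const reader = port.readable.getReader();
while (true) {
  const { value, done } = await reader.read();
  if (done) {
    reader.releaseLock();
    break;
  }
  console.log(value); // a Uint8Array of bytes from the device
}
```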

49:58 - How to write to a Serial port and all details about the Serial API are available at web.dev/serial. There is a long tail of human interface devices, also known as HID, such as alternative auxiliary keyboards or exotic gamepads, that are just too new, too old, or even too uncommon to be accessible by a system’s device drivers. Lucky for us, the Web HID API provides a safe way to access those. Olivier Gay from Logitech will share why it matters. - At Logitech, being able to connect our devices with most systems and OSs, including the web, is important.

50:37 - Technologies like Bluetooth, HID, or USB, are the core for Logitech mice and keyboards. We have done some nice demo web apps and tracking with our devices. For example, we have done a demo where a device will update instantly from the web. And it works great. We have been contributing to these web hardware APIs since the beginning, participating in forums, and submitting patches. With Web HID, we also quickly built a web app to pair the devices through our unifying Nano.

51:03 - It's easy to communicate with devices using the Web HID API by writing just a few lines of code. We originally built this web app for the Chrome browser. Last year, the first beta of the new Edge was released, and when we tried our web app on Edge, we had the nice surprise that it was immediately working. That's the cross-platform nature of the web: you can have just one code base, and it works on different systems. - Now, let's have a look at what it takes to make your Apple keyboard backlight blink. Yeah, just like this. Prompt the user to select the keyboard backlight HID device. Then, wait for the connection to open, and finally, send some feature reports with bytes that contain instructions to turn the backlight on and off. This is a simple example, but it shows you how quickly and easily you can get started with Web HID.
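A hedged sketch of those three steps (the filter values, report ID, and payload bytes are placeholders; the real values are specific to the device you are targeting):

```js
// Prompt the user to select the HID device
const [device] = await navigator.hid.requestDevice({
  filters: [{ vendorId: 0x05ac }], // e.g. an Apple keyboard
});

// Wait for the connection to open
await device.open();

// Send a feature report whose bytes carry the backlight instruction
const reportId = 1;                           // device-specific
const payload = new Uint8Array([0x00, 0x01]); // device-specific
await device.sendFeatureReport(reportId, payload);
```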

51:55 - Have a look at web.dev/hid to learn more about the API and play with samples. And here we are. I deeply believe the web platform is getting better for everyone with hardware device support. It gets better for kids who love hacking devices with friends. It gets better for developers, as the barrier to entry is lower, and they can now provide a solution that works cross-platform. It gets better for users, as they don't have to install drivers via privileged executables. And most of all, it gets better because it allows old and new devices to coexist in an open platform. Thanks for watching.

52:43 - (buzzing) (escalating buzzing) (alarm) (clicking) (clink) (poof) (alarm) (splash) (alarm) (clicking) (upbeat singing) (beep) - So we have been filming these little interstitial videos all day, and we are running out of ideas and patience, so we’ve come up with a final game. And what we’re going to do is we’re going to see how long we can not blink for. Was that the correct grammar to use? - Not really, but the words were all there. You just have to make them work for yourself. Don’t blink, any of us, for as long as possible.

55:19 - If we miss a little mini blink, then, you know, I’m sure everybody will hold us to the high standards of honesty that we expect from ourselves here. - You know hyperventilate. - I’m trying to get a quarter of blinks. - I’m just saying, you know how you hyperventilate before diving because it allows you to hold your breath longer? - Oh. I wonder if the same thing applies to blinking. If you hyper blink… - Hyper blink. -Yeah. All right. - There’s going to be somebody out there - who’s going, “That’s the worst thing to do.” They’ve all completely- - You’ve just lost three years of your life span.

55:48 - - Especially with a big light in my face here as well. - Yeah, that’s not helping. - Okay, Here we go. Are you ready? You ready? - Yeah, I think so. - Yes. - Okay. - I’m ready. - Three, two, one, go. Oh, I kind of went for a huge eye open at the start there, which I feel like I might have… - I blinked. (laughing) - Well, you’ve got to be staring at something. - Well. Yeah. I was just… - My eye is twitching. - Moving your eyes is fine. - This is really hard. - Oh, you would think this would be easier than this, but my eyes are properly stinging now.

56:25 - - I just enjoy watching your frozen faces staring into the camera. - Ah. - So… - The light is drying out your eyes as well. - It is because, I mean, obviously to film this, we’ve got a huge- well, I’ve got a huge light right there. - So have I. - Oh, I blinked. I blinked. Ah. - Ah. - I have still not blinked yet. I think I haven’t anyway. Oh, there I go. - Oh. That was the nicest blink I’ve ever had. (laughing) - Unless I’m much mistaken, I don’t think I blinked there throughout. I’ll be interested to- I might just watch that back from myself at some point.

57:03 - - It turns out, Paul, we have a recording of you that you can re-watch on loop. - I'm just blown away. - This might be one of the most uncomfortable GIFs we can make. (laughing) - Just staring. - And judging. - Well, we're definitely out of ideas then. - Yeah. - Yeah. That's enough from us. (beep) (upbeat music) - Hi there, and welcome to this short introduction to Structured Data for Developers. Let's start right at the beginning. What is structured data, and how does it benefit your website? In short, structured data is an additional machine-readable piece of information that you can put on your website to tell machines, like Googlebot, more about the content of that website.

58:06 - There's a wide variety of standardized ways to identify and describe your content across various verticals. The list on the slide here is just a small selection; for the full list of verticals that are supported by Google Search, check out the link here. So if you're wondering why you should add this machine-readable semantic data in the first place, you might have actually seen some of the things that are powered by it. Many different vendors, including Google, use this information to highlight your web content in many places it would otherwise not show up.

58:44 - Let me give you a few examples just from Google's side of things. And again, keep in mind there are more vendors and products that use this data. When you search for recipes, for example, you may see these results with a picture, ratings, and other information. We call these rich results. These are not the only type of highlighting your web content might get in Google Search. Content from any of the other supported verticals, such as books, may show as specialized rich results too.

59:19 - Note that implementing structured data is not a guarantee, though. But it is a necessary step to make a website eligible for rich results. One format for specifying structured data, and the one we recommend, is JSON-LD, short for JavaScript Object Notation for Linked Data. This can be added as a script tag on your website with the required properties and the information about your content. Okay, so now we know what structured data is, why it matters and what it looks like.
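For example, a recipe page might include a block like this (the values are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Banana Bread",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "image": "https://example.com/banana-bread.jpg",
  "aggregateRating": { "@type": "AggregateRating", "ratingValue": "4.8", "ratingCount": "120" }
}
</script>
```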

59:53 - But how do we get this into our websites? We can, of course, just put it in our website statically as part of the HTML, or using our content management system. Luckily, the handling of structured data in Google Search is also part of the indexing process, so it does benefit from JavaScript rendering, allowing us to also specify the structured data dynamically. However, this may vary between different vendors, and we do recommend having a server-side solution, as those tend to be more reliable and robust. If you were to use client-side JavaScript, an implementation could look something like this.

00:36 - Here a script tag is created, filled with structured data coming from an API response, and injected into the document. That is a viable way of working with dynamic structured data. Alternatively, you may use Google Tag Manager to grab the information from the page and inject it into structured data templates that then go onto the page via Google Tag Manager. There are many ways of getting structured data into your pages, but most importantly, you have to test your implementation to make sure your pages are eligible for rich results.
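A minimal sketch of that client-side injection approach (the endpoint and fields are illustrative):

```js
async function addStructuredData() {
  // Fetch the data that describes the page's content
  const recipe = await fetch('/api/recipe/42').then((response) => response.json());

  const structuredData = {
    '@context': 'https://schema.org',
    '@type': 'Recipe',
    name: recipe.name,
    image: recipe.imageUrl,
  };

  // Create a JSON-LD script tag and inject it into the document
  const script = document.createElement('script');
  script.type = 'application/ld+json';
  script.textContent = JSON.stringify(structuredData);
  document.head.appendChild(script);
}

addStructuredData();
```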

01:17 - So let's look at testing for a moment. The best go-to tool to test implementations during development, or ad hoc if you're interested in rich results, is our Rich Results Test. It supports copy-and-paste of your code, or taking a URL to test. It shows you if any markup that is used in rich results was found, what the rendered HTML looked like, as well as whether the markup fulfills all the requirements necessary for the rich results. If you want to monitor your structured data across your whole site, or multiple sites, you can use the free Google Search Console. It provides you with reports for each type of structured data that was detected on your pages, along with statistics on possible improvements and errors. Last but not least, using certain frameworks and language features (here we see a React web app built with TypeScript) allows you to get warnings and errors at development time. All right, that was a lot to take in, so let's summarize this real quick. Structured data is additional machine-readable information about the content of the webpage.

02:28 - That information is standardized by the schema.org community and used by various vendors and their products, including Google Search, to highlight your web content in various ways. For Google Search, it's fine to inject this information using client-side JavaScript. If you want to learn more, check out our intro guide on structured data in Google Search. All right, thanks a lot for watching, and I hope it made you excited about structured data. Stay safe, have a great time, bye bye. (upbeat music) - Hi, my name's Terry Ednacot, and I'm the program manager for Google Web Creators.

03:12 - Today I wanna talk about how we can make the web more visual with Web Stories. Many of us are already familiar with stories. They’re always full screen, portrait and immersive. Stories allow readers to tap to go forward and backward, and you drag up to open a page attachment. And of course, stories allow you to swipe to go to the next story if there is one and so on.

03:34 - Web Stories keep this consistent experience that you've come to love, but offer some great benefits to you as a creator and reader. So why has this style of producing content become so popular? Current implementations focus on ephemerality and an ultra-low barrier to creation. But our bet is that the stories format works beyond the ephemerality use case and can become its own pillar of the web in the media landscape. So now that we understand the common characteristics of a story format, let's take a closer look at Web Stories. Web Stories are a web-based version of the popular stories format that blend video, audio, images, animation, and text to create a dynamic consumption experience.

04:14 - So why should you create Web Stories? One, for creative control. Web Stories are entirely under your direction, just like any other content on your website. Each story can be designed to fit your brand. Two, they’re monetizable. Story authors retain 100% of any monetization with ads or affiliate links, and are in full control of hosting, sharing and linking to those stories. Three, they don’t expire. The stories live on your website, as long as you want them to.

04:45 - Tappable stories have become a major part of the way consumers engage with content. In fact, 60% of weekly mobile content users now consume tappable stories daily on social media. We’ve seen studies that show that they’re more engaging than a text article. There’s a unique opportunity for publishers to reach their audiences with a familiar tappable stories format. Web Stories bring that format to the open web and allow publishers to take control of their content without being confined to a single ecosystem.

05:15 - Stories are also more structured and cheaper to make than video. They can be ranked, they can be crawled and they’re scannable as a text article. A fundamental characteristic of the web is openness, specifically open access, consumption and creation. Simply put Web Stories are web pages. And since Web Stories live on the web, they don’t just live in an app, but can be accessed anywhere the web can be accessed. Sharing a web story is as simple as sharing any other link on the web.

05:45 - Not only are stories easy to share and embed but you can also link to your other content. We’ve already seen other publishers linking to stories from their home pages on social channels and in newsletters. And because of that, we’re seeing Web Stories adopted by a range of publishers around the world as a medium for a broad range of content. From the topical and deep, to the fun, personal and interesting. We’re also seeing Web Stories adopted by independent creators for their sites via creation platforms, and by brands for user engagement and as immersive learning pages for ads.

06:19 - So now that we understand what Web Stories are, let’s talk about where you can see them. Stories are web pages and can be seen anywhere a webpage can be seen, but at Google, we’re going the extra mile to let them shine. Just recently, we announced a new home for Web Stories on Discover. Discover is a browsable feed experience for users to stay up-to-date with the topics, trends, publishers, and creators they care about. The Web Stories carousel on Discover showcases some of the best visual content from the web.

06:50 - You might ask, how's Discover different from search? With search, users enter a search term to find helpful information related to their query, but Discover takes a different approach. Instead of showing results in response to a query, Discover surfaces content primarily based on what may be a good match for the reader's interests. In addition to Discover, we continue to surface more Web Stories across Google Search results globally on mobile. So at this point, you're hopefully either interested in consuming stories or making your own. Building a web story can be broken up into five simple steps.

07:26 - Step one is to select a story creation tool. Just like for regular web pages, it's important to have the right tools at hand to create Web Stories, tools that help you speed up the content creation process and allow you to achieve better results, thanks to visual editing and guidance along the way. For most people, drag-and-drop tools like the Web Stories editor for WordPress, MakeStories, and Newsroom AI make it easy to add Web Stories to your website in minutes. These editors require no coding skills, and pages can be dragged together and configured, similar to a design tool. But you're not most people; the chance is very high that you're a web developer.

08:05 - You can use that to your advantage and hand-code stories from scratch, which will give you more flexibility and a chance to play around with features that are not yet available in the what-you-see-is-what-you-get tools I just talked about. Step two is to draft your storyline. Before sitting down and creating your story, you need to do your homework to find a concept, write a compelling story arc, and source creative assets. Step three is the hardest part: sourcing the right content. The right imagery or videos are the make-or-break of a good web story. If you've been blogging on the web before, this is kind of counterintuitive.

08:39 - With a typical blog post or article, you start with a copy and then find some image to visualize a point. With a web story, you start with the image and then write some copy to clarify your point. Step four, make it your own. Take the time to customize your story and make it feel like your personal brand. And don’t worry, the more you use these editors, the faster you can create stories. Number five is to optimize it. Learn from engagement stats so your next story reaches an even greater audience.

09:11 - Google Search Console will provide additional details about how your Web Stories are performing in search and discover. And lastly, join the Google Web Creators community. The web is an amazing and enormous platform, but creating for it can be difficult and it lacks a support community. Google Web Creators exist to provide tools, guidance, and inspiration for people to make awesome content on the web. You can check us out on our blog, Twitter, Instagram, and YouTube.

09:40 - To summarize: Web Stories keep the great UX you already know, and stories now meet the widest audience ever, the web. And since these stories live on the open web, they provide content creators and publishers an immersive format for delivering content in a visually rich, tap-through, and easy-to-share experience. But there's more. Great story experiences are coming to the web, and we're making them so much better. We're building out our rich open-source story player to help you integrate Web Stories into your site experience.

10:12 - We’ll help you craft the right user experience for your stories, ranging from highlighting contributors to an interactive grid or a story carousel. Once the users start consuming stories on your site, they’ll be able to continue swiping to your other stories and react to them just like any other content on your site. Getting started with a Web Story player is easy. A few lines of code will get you up and running with a rich swipeable experience. You can also embed a Web Story within an article, customize the immersive consumption experience with your branding, and even personalize what story to show next to your users in real time.
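Getting the player onto a page is a short embed, along these lines (a sketch based on the open-source amp-story-player; the story URLs are illustrative):

```html
<script async src="https://cdn.ampproject.org/amp-story-player-v0.js"></script>
<link href="https://cdn.ampproject.org/amp-story-player-v0.css" rel="stylesheet" />

<!-- Each anchor points at one of your Web Stories -->
<amp-story-player style="width: 360px; height: 600px;">
  <a href="https://example.com/stories/banana-bread">Banana Bread, Step by Step</a>
  <a href="https://example.com/stories/sourdough">Sourdough Basics</a>
</amp-story-player>
```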

10:49 - And of course, we’re making the core story experience itself richer and more interactive with a number of new components. You can integrate interactive quizzes to engage your users and build a sense of community, and even poll them for their opinion on the questions that really matter. We’re also continuing to help you bring stories to life by integrating rich support for 360 images and videos. You can direct your users to points of interest or allow them to explore the space on their own. And we’re just getting started. We’re working on innovative ways to take storytelling to new heights.

11:23 - Soon, you’ll be able to embed audio from services like SoundCloud or Spotify, allow users to easily browse all your story content, enable your users to interact more deeply with products, and wow them with even richer animations and immersive transitions. To end, here’s a list of resources and our social handles so you can stay connected. We can’t wait to see what stories you tell. Thank you for your time. - Hello, I’m Paul Kinlan. I hope that you’ve enjoyed the last two days. We’ve had lots of fun creating the content, and we had a great time meeting you in our virtual experience, the Chrome Dev Summit Adventure.

12:05 - The web is a vital part of people’s day-to-day lives. And during these last nine months, we’ve seen a shift from physical moments to everything moving digital. And we’ve seen a massive increase in the usage of the web. I know that many of my team, like many of you, have been and will be celebrating your holidays away from your friends and family. And we believe that the web can help bring you closer together.

12:26 - Our goal with this event has been to show you how you can help connect people around the world with experiences that they love to use. And as you build great sites and apps, there are three core areas that we hope you’ll focus on in the next year. As the world has moved digital, your users’ safety and privacy are more important than ever. We want to make it harder for sites and services to track you without breaking critical use cases like sign-in, payments and monetization. In January, we announced that we’d be changing the way that cookies work on the web, restricting them to first-party by default and requiring developers to explicitly state when they can be used in a third-party context, all via the SameSite attribute.
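
As a small, hedged illustration (the cookie names, values, and port here are made up), a server can state that intent explicitly in its Set-Cookie headers; this TypeScript sketch uses Node’s built-in http module:

```ts
// A minimal sketch of stating cookie intent explicitly via the SameSite
// attribute. Cookie names and values are placeholders.
import { createServer } from 'http';

createServer((req, res) => {
  res.setHeader('Set-Cookie', [
    // First-party only: never sent on cross-site requests.
    'session=abc123; SameSite=Strict; Secure; HttpOnly',
    // Explicitly opted in to third-party contexts (e.g. an embedded widget);
    // SameSite=None is only honored when paired with Secure.
    'widget=xyz789; SameSite=None; Secure',
  ]);
  res.end('ok');
}).listen(8080);
```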

13:06 - This was done with the goal of fundamentally enhancing privacy on the web. This and a number of other privacy-focused changes have rolled out across Chrome, with more in the pipeline. And you can use our tooling to help you identify these issues and fix them. Moving on, it’s long been known that the performance of your site or web app has an impact on your users’ experience. We believe that one of the biggest investments you can make in the next year is to focus on your Core Web Vitals.

13:34 - The Core Web Vitals program kicked off with three metrics across loading, interactivity and visual stability, with the intention of giving you direct feedback into how your users experience your site. These metrics let you more easily target where to put your team’s effort when focusing on building a better experience. And even more recently, the Search team announced that a page’s Core Web Vitals and a range of other signals will be considered when ranking results in Search. And finally, the web platform has become increasingly more powerful, allowing you to build rich and capable experiences that work the way people have come to expect, but without an install. Take a look at these new APIs and see how you can progressively integrate them into your site to build better experiences that have never been possible on the web before.
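
For a sense of what those signals look like in code, here is a rough TypeScript sketch using the standard PerformanceObserver API to watch the entries behind Largest Contentful Paint and Cumulative Layout Shift. In practice the web-vitals library wraps these details (plus First Input Delay and final-value handling); the console logging here is purely illustrative.

```ts
// Rough sketch: observe the raw performance entries behind two
// Core Web Vitals. Production code would use the web-vitals library.
new PerformanceObserver((list) => {
  // The most recent entry is the current Largest Contentful Paint candidate.
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  console.log('LCP candidate (ms):', last.startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    // Ignore shifts caused by recent user input, as the metric does.
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log('Cumulative Layout Shift so far:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });
```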

14:19 - I look at the last 10 years of Chrome, and I’m inspired by how closely we’ve kept to that early mission of building a browser that meets the needs of the people using the web, whether it’s for sites or rich web apps. But the world has changed in the last 10 years. Mobile became a major focus in the industry with the introduction of the iPhone and then Android, and mobile has brought computing to billions of people, changing the way that people think about using the web. So with that in mind, I’ve asked Alex Russell, one of the leads for the Web Platform Team and the tech lead for Project Fugu, to give us some insight into the properties that make the web so unique, why we feel our mission is so critical, and how we think about moving the web forward. - Thanks, Paul. Chrome’s mission has always been to move the web forward, but what direction is forward? That’s a question I spend a lot of time thinking about.

15:10 - As computing has shifted to mobile, the fundamental environment for software has changed in ways that are deeper than whether we click or tap. So how can a platform like the web adapt? The press sometimes focuses on small differences and exclusive features, which are by definition niche while they’re new. If the web had supported those features on day one, portability be damned, would that be forward? Not exactly, but neither is freezing the set of things the platform can do at any point in time; there wasn’t a perfect year for the web. So how do we choose what to add? Can we come up with a model that helps guide us? I think we can. First, we should recognize that the web is a meta platform.

15:51 - Like Flash or Java, the web sits atop operating systems and frameworks to enable portable content that isn’t locked into a single OS or product. This portability, enabled by good licensing terms, lets your sites run on any device with a modern browser. Meta platforms aren’t always successful though, despite this advantage in portability. To understand why some go the distance and others fade, I like to think about how developers place bets. Imagine you’re the tech lead or product manager of a team starting a new project for a client.

16:21 - Your team has a set of existing skills, but those aren’t necessarily a hard constraint. If the entire industry is going to a new framework or operating system, the new project is as good an excuse as any to learn it. The capabilities of those platforms are table stakes however. If a core requirement of the project can’t be met, it doesn’t matter how cheap it is to build the UI. You simply won’t choose that platform. The same might be true if you fear it won’t keep up with you.

16:48 - Your chosen approach also needs to deliver value. Building once and customizing per form factor can be much cheaper than rebuilding the same experience multiple times, and it can help keep your iteration velocity up. The ability to A/B test matters a lot. Over time, clients will want to work with firms that can deliver more for less. Now, mentally zoom out from the single project and decision point. Think about the thousands of new projects that get started every single day. Now drive forward through time.

17:19 - A month, then a year, then the full five-year life of a project. Each one has folks who’ve made, are making or will make the same choices with perhaps slightly different constraints, timelines and experiences. As time flies by, we can see platforms rise and fall one project choice at a time. Each choice persists for the full life of the product, but choices about how to rebuild are made years apart in potentially very different worlds. No platform can continue to succeed if it doesn’t win business in the intervening time too.

17:54 - These choice moments can feel disconnected but are in fact bound together. Can the system you bet on continue to reach your users while continuing to enable the lion’s share of things you want computers to do? Or does it fall off the trend line and become a legacy system? It’s this list, the set of things that most computers can do, that is foundationally important and yet so often obscured in our conversations by shiny niche features. Systems like the web are always giving up a little capability in trade for a lot more reach. And when that formula nets out, developers and users win. But platforms that fail to grow at the same rate as the set of really common features become harder and harder to bet on as time goes by.

18:39 - For platforms, stasis is just a delay in the delivery of the news that you’re irrelevant. Platforms that stave off that irrelevance do so by continuing to meet the choices of teams as they come up for air and consider their options afresh. The long arc of computing is that devices get more features every year at lower costs. That means the set of things that most computers can do expands inexorably at a pretty constant rate. As the set grows, applications that were previously niche or available only on proprietary platforms transition to meta platforms if they can support them.

19:13 - That unlocks huge benefits in cost, security and reach. For instance, it wasn’t that long ago that the web’s historic strengths weren’t enough to make video conferencing work well in the browser. Handling mouse and keyboard events, rendering text and drawing boxes might have been enough to support news or email, but they didn’t get at enough of the features needed for video. Over time, prodded by plugins and proprietary tools, the web went on a journey. First, browsers got good at video. Next, we learned to talk to cameras and mics and speak low-latency networking protocols.
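
Talking to cameras and mics is the getUserMedia API (the low-latency networking piece is WebRTC); as a minimal sketch, with the function name and the idea of wiring the stream straight to a video element being mine rather than from the talk, it might look like this:

```ts
// A minimal sketch of the camera/microphone capability described above:
// ask for permission, then attach the resulting stream to a <video> element.
async function startPreview(video: HTMLVideoElement): Promise<void> {
  // Prompts the user; rejects if permission is denied or no device exists.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  video.srcObject = stream;
  await video.play();
}
```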

19:46 - Today those features are key not only to our endless 2020 video conferencing lifestyle but to a whole new class of applications. Closing the gap let video conferencing move into the browser but, happily and perhaps accidentally, also enabled desktop sharing and game streaming. These adjacent use cases turn up every time we expand the platform. Stadia, GeForce NOW, and Luna combine things that the web has been good at for a long time with better codecs and a few new capabilities like gamepad and HID device support. The result isn’t just email with video conferencing, but fundamentally new product categories that help keep the web ecosystem relevant.
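
The gamepad support mentioned here is the standard Gamepad API; a rough sketch of listening for a controller and polling its state each frame (the names and logging are illustrative only) could look like this:

```ts
// Rough sketch: detect a connected controller, then poll its buttons
// once per animation frame, as a game loop would.
window.addEventListener('gamepadconnected', (event) => {
  const { index, id } = (event as GamepadEvent).gamepad;
  console.log('Controller connected:', id);

  function poll(): void {
    // getGamepads() returns a snapshot; entries can be null.
    const pad = navigator.getGamepads()[index];
    if (pad && pad.buttons.some((button) => button.pressed)) {
      console.log('Button pressed at', pad.timestamp);
    }
    requestAnimationFrame(poll);
  }
  poll();
});
```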

20:26 - Productivity apps are undergoing a similar revolution as we add a few seemingly small capabilities that build on the great set of core features the web has long had. Access to fonts, local files and full-bore compute aren’t optional for products like Adobe Spark, Google Earth, SketchUp, Figma, and Photopea. But thanks to the web’s rich existing feature set and massive reach, helping them move to a better, safer distribution channel doesn’t require a whole new platform, just some targeted incremental additions. Platforms that choose to stay in one lane are inevitably picking the legacy lane. And when that happens, the vibrance and investment that they once attracted fades away.
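
Local file access is one of those targeted additions. As a hedged sketch, the File System Access API’s showOpenFilePicker (shipping in Chromium-based browsers and feature-detected here, since other browsers may not support it; the function name is my own) lets an app read a user-chosen file:

```ts
// A small sketch of reading a user-selected file via the
// File System Access API, with a graceful fallback when it's unavailable.
async function openTextFile(): Promise<string | null> {
  if (!('showOpenFilePicker' in window)) return null; // feature-detect first
  // Cast to any because older TypeScript DOM typings omit this API.
  const [handle] = await (window as any).showOpenFilePicker();
  const file: File = await handle.getFile();
  return file.text();
}
```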

21:06 - For the web to stay healthy, it does need to support more of our computing lives every year, but that doesn’t mean adding every geegaw and whatsit from the latest devices. Instead, we unlock a better future by steady integration of stuff that looks boring. But at the web’s scale and portability, boring can be revolutionary. - That’s interesting. So meta platforms have to adapt to survive, and we’ve used mobile apps as a lens to focus our ideas for improving the web platform. For example, with more app-like features, like push notifications, sensor access and PWA installation. Alex, maybe you could just dive into that a little bit more.

21:45 - How should we ensure we focus on the right platform improvements as people’s usage of the web changes in the years to come, and how do we keep new features from undermining the value of the existing ecosystem? I know a lot of developers worry about the risks of adding new features. - It’s a great question. And the how is just as important as the what. Lots of platforms have fallen by the wayside because they took shortcuts and earned bad reputations for security. So nothing we add to the web can ever be allowed to undermine user trust. This is a key advantage for the web. Traditional software hasn’t given users much in the way of control.

22:19 - And legacy apps can often access every device attached to a computer, or sensitive files, or internal network services with impunity. We know a better version of computing is possible because we’ve seen computers get easier and safer as we’ve moved more of our lives to the web and off of legacy closed platforms. Everything we do to open up new capabilities is of course a balancing act. Do users understand the choices we’re presenting? Can we limit the scope of permission grants, and how easy is it to revoke them? How can we flag that things are being used so that users can make revocation choices? And how do we keep those choices from overwhelming users? Operating systems are iterating in this space too, but the web’s low friction means that we have to meet a higher bar. I’m happy to say that across the dozens of features we’ve added in recent years, the care we’ve taken so far has meant that the sky has in fact not fallen.
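
One concrete piece of that balancing act is the Permissions API, which lets a page check the current grant state without triggering a prompt. A small sketch follows; the 'camera' permission name is supported in Chromium-based browsers, and the cast works around TypeScript’s narrower built-in type.

```ts
// Sketch: query a permission's state ('granted', 'prompt', or 'denied')
// and react when the user later grants or revokes it.
async function watchCameraPermission(): Promise<void> {
  const status = await navigator.permissions.query({
    name: 'camera' as PermissionName,
  });
  console.log('Camera permission is', status.state);
  status.onchange = () => console.log('Camera permission changed to', status.state);
}
```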

23:13 - The web’s power comes from making the devices you already have the best versions of themselves. We aren’t trying to get you to buy a new computer or a new phone; we’re trying to make computing livable, safe and capable. While we’re doing that, we also want to help you build a bridge from your legacy applications to that better world. WebAssembly and these new APIs are opening up opportunities to connect the past to a better, safer future, and to make software easier and more humane for users in the process. Some folks argue that most users don’t need these new features.
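
As a brief sketch of that bridge (the module path and import object are hypothetical), loading a WebAssembly module compiled from existing native code can be as simple as:

```ts
// Sketch: stream, compile, and instantiate a WebAssembly module.
// '/legacy.wasm' and the empty import object are placeholders.
async function loadLegacyModule(): Promise<WebAssembly.Exports> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/legacy.wasm'),
    { env: {} } // whatever imports the compiled module expects
  );
  return instance.exports;
}
```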

23:47 - Perhaps, but it’s a chicken and egg problem. And we can learn from this set of things that most computers can do, while being careful and intentional about expanding the web to meet those needs too. We can’t predict the next game streaming breakthrough, but we can follow the breadcrumbs and open up the next video conferencing by listening to you, our developers. We’re committed to growing the reach of the web, and that means helping you succeed both now and the next time you have to make a choice about which platform to build with. After all, the only way the Chrome team succeeds is if the web as a whole thrives. - Thanks, Alex.

24:21 - Our vision for the web started well before that first comic. We want to help you build great experiences, but we know that we can’t get there alone. The web is the most democratic platform; there is no one owner. So many people and organizations have worked across the ecosystem to help make the platform what it is today. From companies and partners, whether big like Adobe, bringing Spark to the web, or an independent developer like Photopea.

24:48 - They all want to provide a better experience for their users by pushing the boundaries of what the web can do. Our Google Developer Experts, who provide critical feedback so we can help make the platform work for the ecosystem, and who help us get the message out to developers about what’s new and what’s possible. The browser vendors, who work together to create consensus for the new features you need so they can be implemented in a safe and consistent way and you can write your code once for all browsers. And most importantly, you: the feedback you provide and the amazing experiences you create inspire us to push the web forwards so you can make something even better. Thank you for joining us over the last two days.

25:28 - We hope that you’ve enjoyed the sessions, but we’re not finished yet. You can continue interacting with our team in CDS Adventure, and we look forward to seeing you there. (upbeat music)