Mozilla DevCenter
What Is Firefox
Brian King provides a brief look at Firefox's origins and evolution, and then dives into its support for web standards like CSS and XML, its debugging and extension capabilities, and some cool new features in the upcoming 1.5 release. If you're considering a switch to Firefox, this article may help make the decision for you.


Mozilla as a Development Platform: An Interview with Axel Hecht  Axel Hecht is a member of Mozilla Europe's board of directors, and a major contributor to the Mozilla project. At O'Reilly's European Open Source Convention (October 17-20), Dr. Hecht will be talking about Mozilla as a development platform. O'Reilly Network interviewed Dr. Hecht to find out if the long-held dream of Mozilla as a development platform was about to come true.   [O'Reilly Network]

A Firefox Glossary  Brian King, with some help from Nigel McFarlane, covers everything from about:config to "zool" in this fun, fact-filled Firefox glossary. It's by no means exhaustive, but you'll find references to specific chapters or hacks throughout the glossary to Nigel's book, Firefox Hacks. When you're ready to dig deeper, check out his book.   [O'Reilly Network]

Important Notice for Mozilla DevCenter Readers About O'Reilly RSS and Atom Feeds  O'Reilly Media, Inc. is rolling out a new syndication mechanism that provides greater control over the content we publish online. Here's information to help you update your existing RSS and Atom feeds to O'Reilly content.  [Mozilla DevCenter]

Hacking Firefox  This excerpt from Firefox Hacks shows you how to use overlays (essentially hunks of UI data) to make something you want to appear in the Firefox default application, perhaps to carry out a particular function of your extension. For example, you might want to add a menu item to the Tools menu to launch your extension. Overlays allow existing Firefox GUIs to be enhanced.   [O'Reilly Network]

Mozile: What You See is What You Edit  Most modern browsers don't allow you to hit "edit" and manipulate content as easily as you view it, WYSIWYG-style. Mozile, which stands for Mozilla Inline Editor, is a new Mozilla plug-in for in-browser editing. This article by Conor Dowling provides an overview of Mozile and what in-browser editing means.   [Mozilla DevCenter]

The Future of Mozilla Application Development  Recently, mozilla.org announced a major update to its development roadmap. Some of the changes in the new document represent a fundamental shift in the direction and goals of the Mozilla community. In this article, David Boswell and Brian King analyze the new roadmap, and demonstrate how to convert an existing XPFE-based application into an application that uses the new XUL toolkit. David and Brian are the authors of O'Reilly's Creating Applications with Mozilla.   [Mozilla DevCenter]

Remote Application Development with Mozilla, Part 2  In their first article, Brian King, coauthor of Creating Applications with Mozilla, and Myk Melez looked at the benefits of remote application development using Mozilla technologies such as XUL and web services support. In this article, they present a case study of one such application, the Mozilla Amazon Browser, a tool for searching Amazon's catalogs.   [Mozilla DevCenter]

Remote Application Development with Mozilla  This article explores the uses for remote XUL (loaded from a Web server), contrasts its capabilities with those of local XUL (installed on a user's computer), explains how to deploy remote XUL, and gives examples of existing applications.   [Mozilla DevCenter]

Mozdev.org Made Easy  Now that mozilla.org is about to release Mozilla 1.2 and Netscape has come out with the latest version of its own Mozilla-based browser, Netscape 7, this is a great time to see what other people are building with Mozilla's cross-platform development framework. Here's a little history about, and a roadmap to, mozdev.org.   [Mozilla DevCenter]

XML Transformations with CSS and DOM  Mozilla permits XML to be rendered in the browser with CSS and manipulated with DOM. If you're already familiar with CSS and DOM, you're more than halfway to achieving XML transformations in Mozilla. This article demonstrates how to render XML in the browser with a minimum of CSS and JavaScript.   [Mozilla DevCenter]

Roll Your Own Browser  Here's a look at using the Mozilla toolkit to customize, or even create your own browser.   [Mozilla DevCenter]

Let One Hundred Browsers Bloom  In this article, David Boswell, coauthor of Creating Applications with Mozilla, surveys some of the more interesting and useful Mozilla-based browsers available now.   [Mozilla DevCenter]

Using the Mozilla SOAP API  With the release of Mozilla 1.0, the world now has a browser that supports SOAP natively. This article shows you how Web applications running in Mozilla can now make SOAP calls directly from the client without requiring a browser refresh or additional calls to the server.   [Web Development DevCenter]





Today's News
July 22, 2014

Jennie Rose Halperin: Numbers are not enough: Why I will only attend conferences with explicitly enforceable Codes of Conduct and a commitment to accessibility

I recently had a bad experience at a programming workshop where I was the only woman in attendance and eventually had to leave early out of concern for my safety.

Having to repeatedly explain the situation to a group of men who promised me that “they were working on fixing this community” was not only degrading, but also unnecessary. I was shuttled to three separate people, eventually receiving some of my money back approximately a month later (which was all I asked for) along with promises and placating statements about “improvement.”

What happened could have been prevented: each participant signed a "Code of Conduct" that was buried in the payment process for the workshop, but there was no method of enforcement and nowhere to turn when issues arose.

At one point while I was attempting to resolve the issue, this community’s Project Manager told me, “Three other women signed up, but they dropped out at the last minute because they had to work. It was very strange and unexpected that you were the only woman.” I felt immediately silenced. The issue is not numbers, but instead inviting people to safe spaces and building supportive structures where people feel welcomed and not marginalized. Increasing the variety of people involved in an event is certainly a step, but it is only part of the picture. I realize now that the board members of this organization were largely embarrassed, but they could have handled my feelings in a way where I didn’t feel like their “future improvements” were silencing my very real current concerns.

Similarly, I’ve been thinking a lot about a conversation I had with some members of the German Python community a few months ago. Someone told me that Codes of Conduct are an American hegemonic device and that introducing the idea of abuse opens the community up for it, particularly in places that do not define “diversity” in the same way as Americans. This was my first exposure to this argument, and it definitely gave me a lot of food for thought, though I adamantly disagree.

In my opinion, the open-source tech community is a multicultural community, and organizers and contributors have the responsibility to set their rules for participation. Mainstream Western society, which unfortunately dictates many of the social rules on the Internet, does a bad job teaching people how to interact with one another in a positive and genuine way, and going beyond the "be excellent to one another, we're all friends here!" approach helps us participate in a way in which people feel safe both on and off the Web.

At a session at the Open Knowledge Festival this week, we were discussing accessibility and realized that the Code of Conduct (called a “User Guide”) was not easily located and many participants were probably not aware of its existence. The User Guide is quite good: it points to other codes of conduct, provides clear enforcement, and emphasizes collaboration and diversity.

At the festival, accessibility was not addressed in any kind of cohesive manner: the one gender-neutral bathroom in the huge space was difficult to find; sessions were loud, noisy, and often up flights of stairs, making it impossible for anyone with a hearing or mobility issue to participate; and finally, the conference organizers did not inform participants that food would not be free, dramatically increasing the effective cost of attending in an expensive neighborhood in Berlin.

In many ways, I'm conflating two separate issues here (accessibility and the behavior of participants at an event). I would counter that creating a safe space is not only about behavior on the part of the participants, but also on the part of the conference organizers. Thinking about how participants interact at your event has to do not only with how people interact with one another, but also with how they interact with the space. A commitment to accessibility and "diversity" hinges upon more than words and takes concerted, long-term action. It may mean choosing a smaller venue or limiting the size of the conference, but it's not impossible, and it's incredibly important. It also doesn't have to be expensive! A small hack that I appreciated at Ada Camp and Open Source Bridge was a quiet chill-out room. Being able to escape from the hectic buzz made a real difference.

Ashe Dryden writes compellingly about the need for better Codes of Conduct and the impetus for events not only to reflect what a community looks like now, but also where it wants to go. As she writes,

I worry about the conferences that are adopting codes of conduct without understanding that their responsibility doesn’t end after copy/pasting it onto their site. Organizers and volunteers need to be trained about how to respond, need to educate themselves about the issues facing marginalized people attending their events, and need to more thoughtfully consider their actions when responding to reports.

Dryden's Code of Conduct 101 and FAQ should be required reading for all event organizers and Community Managers. Codes of Conduct remove the grey areas surrounding appropriate and inappropriate behavior and allow groups to set the boundaries for what they want to see happening in their communities. In my opinion, there should not only be a Code of Conduct, but also an accessibility statement that collaboratively outlines what the organizers are doing to make the space accessible and inclusive, and that addresses and invites concerns and edits. In her talk at the OKFestival, Penny pointed out that accessibility and inclusion actually make things better for everyone involved in an event. As she said, "No one wants to sit in a noisy room! For you, it may be annoying, but for me it's impossible."

Diversity is not only about getting more women in the room, it is about thinking intersectionally and educating oneself so that all people feel welcome regardless of class, race, physicality, or level of education. I’ve had the remarkable opportunity to go to conferences all over the world this year, and the spaces that have made an obvious effort to think beyond “We have 50% women speakers!” are almost immediately obvious. I felt safe and welcomed at Open Source Bridge and Ada Camp. From food I could actually eat to lanyards that indicated comfort with photography to accessibility lanes, the conference organizers were thoughtful, available, and also kind enough that I could approach them if I needed anything or wanted to talk.

From now on, unless I’m presented a Code of Conduct that is explicit in its enforcement, defines harassment in a comprehensive manner, makes accessibility a priority, and provides trained facilitators to respond to issues, you can count me out of your event.

We can do better in protecting our friends and communities, but change can only begin internally. I am a Community Manager because we get together to educate ourselves and each other as a collaborative community of people from around the world. We should feel safe in the communities of practice that we choose, whether that community is the international Python community, or a local soccer league, or a university. We have the power to change our surroundings and, by extension, our future, but it will take a solid commitment from each of us.

Events will never be perfect, but I believe that at least in this respect, we can come damn close.

[Source: Planet Mozilla]

Mozilla WebDev Community: Beer and Tell July 2014

Once a month, web developers across the Mozilla community get together to share what side projects or cool stuff we've been working on in our spare time. This monthly tradition is known as "Beer and Tell".

There’s a wiki page listing the presenters and links to what they’re showing off and useful side information. There’s also a recording on Air Mozilla of the meeting.

Pomax: RGBAnalyse and nrAPI

This month Pomax had two projects to show. The first, RGBAnalyse, is a JavaScript library that generates histogram data about the colors in an image. Originally created so he could sort ink colors by hue, the library not only generates the data, but also generates images (available as data-URIs) of histograms built from that data.

The second project Pomax shared was nrAPI, a Node.js-based REST API for a website for learning Japanese: nihongoresources.com. The API lets you search for basic dictionary info, data on specific Kanji, sound effects, and Japanese names. Search input is accepted in English, Romaji, Hiragana, Katakana, or Kanji.

HTML result from nrAPI search for "tiger".

Bill Walker: Photo Mosaic via CSS Multi-Column

Next, bwalker shared his personal birding photo site and talked about a new photo layout he's been playing with that uses a multi-column layout via CSS. The result is an attractive grid of photos of various sizes without awkward gaps, one that can also be made responsive without any JavaScript. bwalker also shared the blog post that he learned the technique from.
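As a rough sketch of the technique (the class names are mine, not taken from bwalker's site), CSS multi-column layout flows the photos into columns and adapts with a media query; 2014-era browsers may also need -moz-/-webkit- prefixes on the column properties:

    /* Flow photos into columns; mixed heights pack without gaps. */
    .photo-grid {
      column-count: 3;
      column-gap: 8px;
    }
    .photo-grid img {
      display: block;
      width: 100%;          /* each photo fills its column */
      margin-bottom: 8px;
    }
    /* Responsive without JavaScript: one column on narrow screens. */
    @media (max-width: 600px) {
      .photo-grid { column-count: 1; }
    }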

Dean Johnson: MusicDownloader

deanj shared MusicDownloader, a Python-based program for downloading music from SoundCloud, YouTube, Rdio, Pandora, and HypeScript. The secret sauce is in the submodules, which implement the service-specific download code.

Chris Lonnen: Alonzo

Lastly, lonnen shared alonzo, a Scheme interpreter written in Haskell as part of a mad attempt to learn both languages at the same time. It uses Parsec to implement parsing, and so far implements binary numeric operations, basic conditionals, and even rudimentary error checking. The development roughly follows along with "Write Yourself a Scheme in 48 Hours" by Jonathan Tang.

Sample Runs of alonzo


Thanks to all of our presenters for sharing! If you're interested in attending or presenting at the next Beer and Tell, subscribe to the dev-webdev mailing list! The wiki page and connection info are shared a few days before each meeting.

See you next month!

[Source: Planet Mozilla]

David Boswell: Community Building Stories

One of Mozilla’s goals for 2014 is to grow the number of active contributors by 10x. For the first half of the year, the Community Building team has been supporting other teams as they connect more new contributors to their projects.

Everyone on the team recently blogged about their experience supporting projects. The stories below show different stages in the lifecycle of communities and show how we’re helping projects progress through the phases of starting, learning, scaling and then sustaining communities.

We’ve learned a lot from these experiences that will help us complete the goal in the second half of the year. For example, the Geolocation pilot event in Bangalore will be a template for more events that will connect more people to the Location Services project.

Photo courtesy of Galaxy Kadiyala

These are just a few of the stories of community building though. There are many other blog posts to check out and even a video Dia made about how contributors made the Web We Want video available in 29 different languages.


I’d love to hear what you’ve been doing to connect with more contributors and to hear about what you’ve learned. Feel free to leave links to your stories in the comments below.


[Source: Planet Mozilla]

Selena Deckelmann: My recent op-ed published about Portland and startups

I was featured in the Portland Business Journal last Friday! I wrote an essay about startups and about the experiences of women in the Portland tech community that have led me to stop referring women to startup jobs unless the startup is run by fellow PyLadies.

Some excerpts:

It takes more than one CEO's alleged behavior to cause 56 percent of women to leave technology-related fields by mid-career, according to a Harvard Business Review study. That's twice the rate at which men leave the tech industry.

After all, 63 percent of women in STEM industries (science, technology, engineering, and math) have experienced sexual harassment, according to a 2008 study.

I can’t recommend that women work for startups in Portland.

Startup funders should keep holding executives accountable. Company cultures grow from the seeds planted by their leaders.

These companies need [qualified HR, skilled with workforce diversity issues], and our tech leaders should demand it.

Read the whole thing at the Portland Business Journal’s site!

[Source: Planet Mozilla]

Mike Ratcliffe: View DOM Events in Firefox Developer Tools

I recently realized that support for inspecting DOM events is very poor in pretty much all developer tools. Having seen Opera Dragonfly's implementation some time ago, I liked the way you could very easily see the scope of an event.

I have used a similar design to add DOM event inspection to Firefox Developer Tools. The event icons are visible in the markup view, and if you click on them you can see information about the event, including its handler. If you click the debugger icon at the top left of the popup, it will take you to that handler in the debugger.

Visual Events in Firefox Developer Tools

Whilst developing this feature I noticed that my workflow changed considerably. I found myself repeatedly looking at the event handlers attached to e.g. a button, clicking the debug icon, adding a breakpoint and clicking the button.
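To make that workflow concrete, here is the kind of page script the feature surfaces (a hypothetical example, not code from the DevTools patch): the markup view flags the button with an event icon, and the popup's debugger icon jumps straight to the listener function, where you can set a breakpoint.

    // Hypothetical page script: the Inspector's markup view shows
    // an event icon next to the <button> element; clicking the icon
    // reveals this listener, and the debugger icon jumps to it.
    var button = document.getElementById('save-button');
    button.addEventListener('click', function onSave(event) {
      event.preventDefault();
      console.log('save clicked');
    });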

We hope this feature will be useful to you. If you have any ideas about how we can improve it, please let us know via our feedback channel or Bugzilla.

[Source: Planet Mozilla]

Doug Belshaw: FirefoxOS v2.0 is possibly the easiest-to-use smartphone operating system I’ve experienced

My father's always been a fairly early adopter of technology. He happily uses a device that wirelessly connects his golf club to his iPhone, for example. My mother? Not so much. Until this weekend she was still sporting an old Nokia feature phone. She kind of wanted a smartphone, but didn't want the complexity or the expense.

FirefoxOS v2.0

Meanwhile, I’ve been using a Geeksphone Peak smartphone recently. It’s not the latest FirefoxOS device (that would be the Flame), but it’s a significant step up from last year’s Geeksphone Keon. I’ve been using the pre-release channel of v2.0 of FirefoxOS, which is a departure from previous versions. Whereas they were similar in look and feel to Android, FirefoxOS v2.0 is different.

Every weekend, we go over to my parents’ house for Sunday lunch. Yesterday, we got talking about technology and I showed my parents my FirefoxOS device. One thing led to another, and (because all of my stuff was backed up) I wiped the phone, transferred my mother’s contacts, and swapped SIM cards. My wife gave her some tips, and then we drove off into the sunset with her Nokia phone.

I don't think I would have felt comfortable leaving her without her old phone to revert to if I were giving her an Android device or iPhone. There's something so simple yet so powerful about FirefoxOS v2.0; I'm happy to use it myself and hand it over to other, more technophobic people. Yes, I understand that I'm a Mozilla employee fully invested in the mission, but those who know me understand that I don't say positive things about specific technologies without good reason.

According to the roadmap, today's the day that FirefoxOS v2.0 becomes feature-complete. There are some really nice features in there too, like WebRTC (imagine Skype/FaceTime, but built with web technologies), edge gestures (something I really missed from my old Nokia N9), and Sync. If you haven't had a chance to try one out, I'd take a look at a FirefoxOS device over the coming months. The operating system is currently being tested on tablets and TVs, and it means, of course, smart devices without the usual vendor lock-in.


Note: the screenshots are from this post as I forgot to take some before wiping the phone and lending it to my mother!

[Source: Planet Mozilla]

Michael Verdi: Localized screencasts perform better – go figure

Planet Mozilla viewers – you can watch this video on YouTube.

I created this video about bookmarks for Firefox 29. It’s in English and has closed captions for a few languages, including German. But you can see from this audience retention data that German speakers don’t watch the video as much as English speakers.

So, with Kadir's help, I made a German version (above). You can see that this video performs much better in German-speaking locales. Of course, this is what we expected, but it's cool to see how plainly it shows up in the data.

Note: Rewinding and re-watching can result in values higher than 100%.

[Source: Planet Mozilla]

Hub Figuière: Going to Guadec

For the first time since 2008, when it was held in Istanbul, I'm coming to Guadec. This time it is in Strasbourg, France, which works out nicely thanks to a work week scheduled just beforehand in Paris.

I won't be presenting anything this year, but I hope to catch up a bit more with the GNOME community. I was already at the summit last fall, as it was held in Montréal, but Guadec participation is usually broader.

[Source: Planet Mozilla]

Hannah Kane: Maker Party Engagement: Week 1

Maker Party is here!

Last week Geoffrey sent out the Maker Party Marketing Plan and outlined the four strategies we’re using to engage the community in our annual campaign to teach the web.

Let’s see how we’re doing in each of those four areas.

First, some overall stats:

  • Events: 541 as of this writing (with more to be uploaded soon)
  • Hosts: 217 as of this writing
  • Expected attendees: 25,930 as of this writing
  • Contributors: See Adam’s post
  • Traffic: see the image below, which shows traffic to Webmaker.org during the last month. The big spike at the end of June/early July corresponds to the launch of the snippet. You can see another smaller spike at the launch of Maker Party itself.

Screenshot: Webmaker.org traffic over the last month

——————————————————————–

Engagement Strategy #1: PARTNER OUTREACH

  • # of confirmed event partners: 200 as of this writing
  • # of confirmed promotional partners: 61 as of this writing

We can see from analytics on the RIDs that Web 2.0 Labs/Learning Revolution and National 4H are the leading partners in terms of generating traffic to Webmaker.org. Links attributed to them generated 140 and 68 sessions, respectively.

Additionally, we saw blog posts from these partners:

——————————————————————–

Engagement Strategy #2: ACTIVE MOZILLIANS

  • Appmaker trainings happened at Cantinas in MozSpaces around the world last Thursday. We're still waiting on a tally of how many Mozillians were engaged through those events.
  • You’ve probably seen the event reports on the Webmaker listserve from Reps and Mentors around the world who are throwing Maker Parties.
  • Hives are in full effect! Lots of event uploads this week from the Hive networks.

Note re: metrics—though there’s evidence of a lot of movement within this strategy, I’m not quite sure how to effectively measure it. Would love to brainstorm with others.

——————————————————————–

Engagement Strategy #3: OWNED MEDIA

  • Snippet: The snippet has generated nearly 300M impressions, ~610K clicks, and ~33,500 email sign-ups to date. We now have a solid set of baseline data for the initial click-through rate, and will shift our focus to learning as much as we can about what happens after the initial click. We are working on creating several variants of the most successful icon/copy combination to avoid snippet fatigue. Captured email addresses will be a part of an engagement email campaign moving forward.
  • Mozilla.org: The Maker Party banner went live on July 16 in EN, FR, DE, and es-ES. So far there's been no corresponding spike in traffic, but it's too early to draw any conclusions about its effectiveness.

——————————————————————–

Engagement Strategy #4: EARNED MEDIA

Our partners at Turner4D have set up several interviews for Mark and Chris as well as Mozillians in Uganda and Kenya.

Radio

Print

English:

Indonesian:

German:

Spanish:

Importantly, Maker Party was included in a Dear Colleague Letter to 435 members of the U.S. Congress this week.

What are the results of earned media efforts?

None of the press we’ve received so far can be directly correlated with a bump in traffic. Because press, when combined with social media and word of mouth, can increase general brand awareness of Mozilla and Maker Parties, one of the data points we are tracking is traffic coming from searches for brand terms like “webmaker” and “maker party.” The graph below shows a spike in that kind of searching the day before the launch, followed by a return to more average levels.

Screenshot: searches for brand terms around the Maker Party launch
SOCIAL:

We do not consider social media to be a key part of our strategy to draw in contributors, but it is a valuable supplement to our other efforts, as it allows us to amplify and respond to the community voice.

You can see a big spike in mentions on this #MakerParty trendline:

See #MakerParty tweets here: https://twitter.com/search?q=%23makerparty&src=typd

That's all for this week. Stay tuned. The analysis will get deeper as we collect more data.


[Source: Planet Mozilla]

Will Kahn-Greene: Input status: July 20th, 2014

Summary

This is the status report for development on Input. I publish a status report to the input-dev mailing list every couple of weeks or so covering what was accomplished and by whom and also what I'm focusing on over the next couple of weeks. I sometimes ruminate on some of my concerns. I think one time I told a joke.

Last status report was at the end of June. This status report covers the last few things we landed in 2014q2 as well as everything we've done so far in 2014q3.

Development

Landed and deployed:

  • 6ecd0ce [bug 1027108] Change default doc theme to mozilla sphinx (Anna Philips)
  • 070f992 [bug 1030526] Add cors; add api feedback get view
  • f6f5bc9 [bug 1030526] Explicitly declare publicly-visible fields
  • c243b5d [bug 1027280] Add GengoHumanTranslater.translate; cleanup
  • 3c9cdd1 [bug 1027280] Add human tests; overhaul Gengo tests
  • ff39543 [bug 1027280] Add support for the Gengo sandbox
  • 258c0b5 [bug 1027280] Add test for get_balance
  • 44dd8e5 [bug 1027280] Implement Gengo Human push_translations
  • 35ae6ec [bug 1027280] Clean up API code
  • a7bf90a [bug 1027280] Finish pull_translations and tests
  • c9db147 [bug 1027286] Gengo translation system status
  • f975f3f [bug 1027291] Implement spot Gengo human translation
  • f864b6b [bug 1027295] Add translation_sync cron job
  • c58fd44 [bug 1032226] en-GB should copyover, too
  • 7480f87 [bug 1032226] Tweak the code to be more defensive
  • 7ac1114 [bug 1032571] CSRF exempt the API
  • ac856eb [bug 1032571] Fix tests to catch csrf issues in the api
  • 74e8e09 [bug 1032967] Handle unsupported language pairs
  • 74a409e [bug 1026503] First pass at vagrantification
  • a7a440f Continued working on docs; ditched hacking howto
  • 44e702b [bug 1018727] Backfill translations
  • 69f9b5b Fix date_end issue
  • e59d4f6 [bug 1033852] Better handle unsupported src languages
  • cc3c4d7 Add list of unsupported languages to admin
  • 32e7434 [bug 1014874] Fix translate ux
  • 672abba [bug 1038774] Hide responses from hidden products
  • e23eca5 Fix a goof in the last commit
  • 6f78e2e [bug 947767] Nix authentication for API stuff
  • a9f2179 Fix response view re: non-existent products
  • e4c7c6c [Bug 1030905] fjord feedback api tests for dates (Ian Kronquist)
  • 0d8e024 [bug 935731] Add FactoryBoy
  • 646156f Minor fixes to the existing API docs
  • f69b58b [bug 1033419] Heartbeat backend prototype
  • f557433 [bug 1033419] Add docs for heartbeat posting

Landed, but not deployed:

  • 7c7009b [bug 935731] Switch all tests to use FactoryBoy
  • 2351fb5 Generate locales so ubuntu will quite whining (Ian Kronquist)

Current head: 7ea9fc3

High-level

At a high level, this is:

  1. Landed automated Gengo human translation and a bunch of minor fixes to make it work more smoothly.
  2. Reworked how we build development environments to use vagrant. This radically simplifies the instructions and should make it a lot easier for contributors to build a development environment. This in turn should lead to more people working on Input.
  3. Fixed a bug where products marked as "hidden" were still showing up in the dashboard.
  4. Implemented a GET API for Input responses. (https://wiki.mozilla.org/Firefox/Input/Dashboards_for_Everyone)
  5. Implemented the backend for the Heartbeat prototype. (https://wiki.mozilla.org/Firefox/Input/Heartbeat)
  6. Also, I'm fleshing out the Input section in the wiki complete with project plans. (https://wiki.mozilla.org/Firefox/Input)

Over the next two weeks

  1. Continue fleshing out project plans for in-progress projects on the wiki.
  2. Gradient sentiment and product picker work.

What I need help with

  1. We have a new system for setting up development environments. I've tested it on Linux. Ian has, too (pretty sure he's using Linux). We could use some help testing it on Windows and Mac OSX.

Do the instructions work on Windows? Do the instructions work on Mac OSX? Are there important things the instructions don't cover? Is there anything confusing?

http://fjord.readthedocs.org/en/latest/getting_started.html

  2. I'm changing the way I'm managing Fjord development. All project plans will be codified in the wiki. A rough roadmap of which projects are on the drawing board, in progress, completed, etc. is also on the wiki. I threw together a structure for all of this that I think is good, but it could use some review.

Do these project plans provide useful information? Are there important questions that need answering that the plans do not answer?

https://wiki.mozilla.org/Firefox/Input

If you're interested in helping, let me know! We hang out on #input on irc.mozilla.org and there's the input-dev mailing list.

I think that covers it!

[Source: Planet Mozilla]

Nigel Babu: OKFestival - Berlin, 2014

For the first time, I actually attended the OKFestival. I didn’t get to attend many sessions, but the conversations I’ve had are spectacular.

The first surprise was meeting malev. A couple of years ago, we both worked together on the Ubuntu project. Now he's an Open News Fellow and I work at Open Knowledge. The FOSS world is truly small :-)

I finally got to meet Christie! I've known of Christie since right before she started at Mozilla, around the time I first heard of Open Source Bridge; later she joined Mozilla Webdev, where I was closely involved back then.

Georg came over to say hi on Tuesday. When I realized that he had been in Uganda for Mozfest East Africa, I introduced him to Ketty, who was also there, leading to an interesting conversation and a great connection.

George Sattler works for XVT Solutions in Australia and is our partner. He is fairly certain that I don't sleep ;) We've been having conversations over email for quite a long time, and it was great to meet George in person.

The Venue

It had been a long time since I last saw Adam Green, the editor of the Public Domain Review, so it was nice catching up with him. Also, Joris! I hadn't seen him since he moved on from OKF :-)

I hadn't seen Riju since he moved to Delhi, and then I ran into him in Berlin! Totally random and great running into him :)

The last time I saw Kaustubh was at Pranesh's farewell party in October (?). We had a good time catching up.

I also met folks from local groups across OKF. As a part-time sysadmin, I talk to most of the OKF community folks at some point through RT. Additionally, I was going around asking for feedback for the sysadmin team. It was great for me to put faces to names, and I suspect vice versa as well.

The usual suspects, who were great to meet, are of course my lovely teammates. It's nice to meet in person, grab a drink, and talk.

Congratulations again to Bea, Megan, Lou, and Naomi for making OKFestival happen!

Cutting the Cake
[Source: Planet Mozilla]

Kevin Ngo: Poker Sessions #19 to #27 - Downswing

Just keep Hulking through the drops.

Since returning from a three-week personal/work/family trip to Florida, things have not gone too hot. I busted out of a $5K (-$80), a freeroll (-$60), a $3K (-$90), a couple of small $300s (-$90), a couple of $1500s (-$100), and a couple more $5Ks (-$160). That totals a -$480 dip, though I try not to be results-oriented.

The first couple of tournaments were rust; I chalk the rest up to "that's tournament poker". MTTs are naturally swingy, despite my playing pretty solid. Most bust-outs were failed steals in the higher all-in-or-fold blind levels; a couple were suckouts. But I won't recite every bust-out hand.

Though I have been doing pretty solid live, I have been getting undisciplined in my online poker play. It's time to hit the books and tighten up. Harrington has a solid guide for preflop play that I need to freshen up on.

After doing some bookkeeping, my poker bankroll after 27 sessions is +$3272.

Sessions Conclusions

  • Went Well: improving on hand-reading, taking less marginal lines, super patience
  • Mistakes: some loose push-fold play, thinking limpers always have marginal hands
  • Get Better At: studying on late push-fold play, whether it needs to tighten up
  • Profit: -$480
[Source: Planet Mozilla]

Soledad Penades: “Just turn it into a node module”, and other mantras Edna taught me

Here are the screencast and the write-up of the talk I gave today at One-Shot London NodeConf. As usual, I diverged a bit from the initial narrative, and I forgot to mention a couple of topics I wanted to highlight, but I have had a horrible week, and considering that, this has turned out pretty well!

It was great to meet so many interesting people at the conference and seeing old friends again! Also now I’m quite excited to hack with a few things. Damn (or yay!).

Slides and code for the slides.

Creative technologist


A little more than a year ago I started working at an agency in London as a creative technologist. It was fun to be trying out all the new things and building cool experiences with them. So whatever new APIs were out there, we would come up with some idea to use them. Sometimes that would be because the client wanted to be “first” in using that browser functionality so theirs would be the first website that did X and that would drive a ton of traffic to them, and other times we just wanted to do some cool experiment for our own website so we would master that technology and also attract traffic and get new clients–that’s how it works in advertising.

We mostly used JavaScript on the front end (except for some old remnants built in Flash). On the server, we used a mix of Python and node.js. Most of the Python was actually for setting up websites and authentication in Google App Engine, which is what they used to host the websites (so the server wouldn't go down when a site got popular), but for all the real-time communication we used Socket.io running in node.js, because it was way easier than the Python alternatives.

I had been toying with node.js on and off for a while already, but this was when I started to actually believe that I could just use JS for everything… even build scripts!

The worst habits

But I had the worst habits after a life of many other programming languages, and although I wasn't consciously aware of them, I knew that something didn't feel quite right. But what? It was hard to say when everyone else in my environment was OK with whatever solution I came up with, as long as it worked well enough and I delivered on time.

This was also part of why I joined Mozilla–I was feeling super guilty that we were building things that were not using standards and they would break in the future, or even worse, set bad precedents and habits. I didn’t want to contribute to a web where the cool experiences only worked on one browser, but I wanted to contribute to make the web more powerful and expressive.

My buddy Jen

cult leader

A couple of months later, I was in Mountain View for my onboarding week. I was disoriented and jetlagged, but also very excited and scared! More people from my future team were in Mountain View that week. One of them, Jen, sent me a message pretty much just as I checked in to the hotel: "hey, ready for dinner?"

I hadn't even met her or spoken to her during the interviews, so I wasn't even sure what she looked like in real life. I washed my face, told myself in the mirror that it was 6 PM, NOT 2 AM as my body was trying to protest, and that everything was OK, and went downstairs to meet her.

She was waiting in the parking lot. I had only seen a tiny picture of her with shorter hair on her (very prolific) GitHub profile, but honestly, who else waits alone in the parking lot of a hotel in Mountain View at 6 PM on a Sunday? We said "hi" and she said: "it's my birthday today, and I want to have a nice dinner". Would I oppose such a thing? No!

Jen’s a philosopher, therefore she philosophises

best github account ever

We walked to Castro Street, spending some time admiring the peculiarities of the businesses on either side of El Camino Real. You might have a chiropractor, a massage parlour, a beauty salon, and a gun seller, all side by side. It was all very amusing. Then we went into a Moroccan restaurant, where we had to prove our age by showing our respective passports, which was amusing again.

So we started talking about the food first: how it didn't taste like anything Moroccan I had had before, and whether the Moroccan food I had eaten in London or Paris could be considered authentic, or more authentic than this, based on closeness to the "source". But you can only analyse food so much, so we switched to other topics, and by the end of dinner she was telling me how she was going to build a distributed blog system, but she would build it as a module so she could then reuse it for other things, because it would be generic enough and... with the wine and the jetlag I was really finding it hard to follow.

She continued discussing and elaborating on these ideas during the week. She was hacking on a module that would be called meatspace. And she excitedly told me: "to empty it you would just call the flush method!". Not being a native English speaker, I didn't understand what 'meatspace' meant initially, so the whole naming seemed disgusting to me. Flushing meat down the drain to empty the stored messages! GROSS.

rtcamera

My first task to get acquainted with the Mozilla process was to port or rewrite one of my existing Android apps to the web. WebRTC support was coming up soon in Firefox, so I opted to build something similar to Nerdstalgia. And I built something, and then I had Jen code review it. I didn’t know it initially, but she had been appointed my “Moz buddy”, to guide me around and introduce me to the usual processes.

She would keep mentioning this notion of “making things into modules” but I still didn’t quite get it. I regularly extracted my code into libraries, right? So why all this insistence on modules?

Modules

Intrigued (or sparked) by her insistence, I started extracting modules out of rtcamera. The first one was the Animated_GIF library, and then gumHelper. This was quite providential because a while later she was exploring the idea of a distributed multiuser chat that could use your webcam to capture your facial expression, and because we had these libraries handy, adding them to the stack was very easy. You might, or might not, have heard of a thing called Meatspace Chat.

Frida is my muse
Frida is one of Jen’s cats. This is Frida after seeing comma-first JS, according to Potch.

Something that really helped me "get the node way" were her comments on how to pass parameters and callbacks. This was one of the reasons my node.js code didn't feel 'right': I was using my own ad hoc style, the result of having programmed in many languages without being proficient in node.js. I wasn't using its idioms, so my code felt weird when used alongside other people's code, even core modules such as fs.

She insisted a lot on using a standard "callback signature", the function(err, result) style, which honestly drove me a bit nuts at the beginning. But if you use the same style throughout the code, you can exchange node modules or even pass the callback on to another function, and it's easier than if you have a different signature in each library.
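A minimal sketch of the convention (readConfig and the file path are made up for illustration):

    var fs = require('fs');

    // Node-style signature: callback(err, result). The first argument
    // is an Error (or null); the result only follows on success.
    function readConfig(path, callback) {
      fs.readFile(path, 'utf8', function (err, contents) {
        if (err) {
          return callback(err);   // propagate instead of throwing
        }
        var parsed;
        try {
          parsed = JSON.parse(contents);
        } catch (parseErr) {
          return callback(parseErr);
        }
        callback(null, parsed);
      });
    }

    readConfig('./config.json', function (err, config) {
      if (err) {
        return console.error('could not load config:', err);
      }
      console.log('loaded', config);
    });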

Simplify

Another of her lessons was that if you are trapped in callback hell, maybe you are doing it conceptually wrong; maybe you should simplify your code so that the calls happen a different way. I am still not totally sure what I like most, promises or just plain callbacks, but I see her point: oftentimes I would bring a Promises library into my project, and then, after refactoring the code to be suitable for promises, I would find that I didn't really need them at all.
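A sketch of what that refactoring can look like; getUser, getPosts, and render are hypothetical app functions, each following the (err, result) convention:

    // Before: each step nests inside the previous callback.
    getUser(id, function (err, user) {
      if (err) return done(err);
      getPosts(user, function (err, posts) {
        if (err) return done(err);
        render(user, posts, done);
      });
    });

    // After: the same flow as small named steps, read top to bottom,
    // with no promise library needed.
    function loadUser(id, done) {
      getUser(id, function (err, user) {
        if (err) return done(err);
        loadPosts(user, done);
      });
    }

    function loadPosts(user, done) {
      getPosts(user, function (err, posts) {
        if (err) return done(err);
        render(user, posts, done);
      });
    }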

Likewise for user interfaces: most of the time we agonise over how pretty something has to look, but the fact is that if the site doesn't provide any value to users, they leave. We should focus on the experience first, and then maybe we can make things prettier.

npm

Another important lesson: it’s totally OK to use other people’s modules! Believe it or not, my initial node code almost didn’t use anyone’s modules, and if I used external code that was because I downloaded the code and copied it to the src folder AND loaded it with a local require path. npm? What was that thing?

Fun fact: I was watching RealtimeConf's live stream because Jen was giving a talk on all the experiments she had been working on and was going to present Meatspace Chat for the first time, so I stayed for a while. And then I learnt a nice lesson, not from her directly but from Max Ogden in his RealtimeConf talk: you don't need to care about the code style in a node module, as long as it works. If it doesn't, either you replace that module with another one, or you fix it. And again, if you're using the same signature, this is easier to accomplish: you're just replacing "boxes".

Having tests is incredibly useful for this. Jen often writes the module tests first and then the module code itself–so she defines the API that way and sees if it feels natural and intuitive.

At this point there's no UI code yet, just tests that make sure the logic works. If you can run the same test input data through two different modules, you can make sure they do what they are supposed to do, and you can also compare their performance, or whatever it is that makes you want to switch modules. This again is way easier if your function signatures are "standard node.js style".
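A minimal sketch of that idea with node's built-in assert module (the resize modules and their API are hypothetical):

    var assert = require('assert');

    // Two implementations honouring the same node-style contract:
    //   resize(image, width, callback(err, result))
    var resize = require('./resize-simple');
    // var resize = require('./resize-fast');   // drop-in swap

    resize({ width: 400, height: 300 }, 100, function (err, result) {
      assert.ifError(err);                   // no error expected
      assert.strictEqual(result.width, 100); // the contract holds
      console.log('resize: ok');
    });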

Browserify

I heard about this one the same week I started at Mozilla, but I was unable to appreciate its awesomeness. I was used to running the Google Closure compiler on my concatenated code and calling it a day. I either concatenated the code with a nice

cat *.js > all.js

or built Python scripts that would read certain code files and join them together, before either invoking a local copy of the Closure compiler (a Java program) or sending the uncompressed code to the Google Closure service, hoping to get it back without errors.

But I quickly forgot about it. Some time later, I was looking into building a somewhat complex thing for my JSConf.EU project, and somehow Jen reminded me about it.

This project was a fun one, because I was using everything in it: server-side node with Express.js serving the slides, which advanced in sync with a music player with virtual Web Audio-based instruments running in the browser, plus Socket.io sending data to and from a hardware MIDI device through OSC. So there was a lot of data passing back and forth and a lot of modules to orchestrate, and including script tags in the browser wasn't going to work well if I wanted to stay sane. So all the front-end code was launched using Browserify.
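The workflow, roughly (the file names are illustrative, not from the actual project): write the front-end entry point with node-style require() calls, then let Browserify walk the dependency graph and emit a single file for the browser.

    // main.js: the front-end entry point, written like node code.
    var gumHelper = require('./gumHelper');
    var animatedGif = require('./Animated_GIF');
    // ... wire the modules together ...

    // Then, on the command line, bundle everything into one file,
    // loaded with a single <script src="bundle.js"> tag:
    //
    //   browserify main.js -o bundle.js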

Another favourite anecdote from this project is that I extracted several modules out of it, with tests and all, which I then reused in other projects. So I was taking advantage of my own work later on, and I like to think that when this happens, more people might find it useful too.

Multiplatform

Finally (and this is not a thing that only Jen taught me), one of the reasons we like node a lot at Mozilla is that it makes it so much easier to write code that works everywhere, and by that I mean on different platforms. As long as node can run on a system, the code can run.

This is very important because most of the time developers assume that everyone else is running the same system they develop on, and in rich countries this often means assuming that people use Macs. That's not the case everywhere, and certainly not in poorer countries. People use Windows or Linux, and setting up a toolchain with a working make tool with which to run a Makefile is either hard or not for the faint of heart.

So in this context, distributing build scripts written for node.js is way more democratic and helps us get our code to more people than if we used Make or Bash scripts.

And here comes one of my favourite stories: one of the meatspacers sent me a PR to improve the build system of one of the libraries I had extracted, making it use node.js with UglifyJS instead of the Bash script I was using. That simple gesture enabled all the Windows developers to contribute to my module!

Conclusions

  • node modularity is awesome, but it takes time to 'get it'. It's OK not to get things on the first try.
  • If you can find a mentor, it will help you ‘get it’ faster.
  • Otherwise, maybe hang out in the proper channels (IRC, user groups, blogs, confs), study other people's code, and BE A SPONGE (a nodesponge?)
  • Don’t be afraid to experiment but also use the safety harness: tests!
  • And don’t be afraid to publish your code – maybe someone else will find it useful OR give you advice to improve it!


[Source: Planet Mozilla]

Botond Ballo: Trip Report: C++ Standards Committee Meeting in Rapperswil, June 2014

Summary / TL;DR

Project Status
C++14: On track to be published late 2014
C++17: A few minor features so far, including for (elem : range)
Networking TS: Ambitious proposal to standardize sockets based on Boost.ASIO
Filesystems TS: On track to be published late 2014
Library Fundamentals TS: Contains optional, any, string_view and more. Progressing well; expected early 2015
Library Fundamentals TS II: Follow-up to Library Fundamentals TS; will contain array_view and more. In early stages
Array Extensions TS: Completely stalled. No proposal related to runtime-sized arrays/objects currently has consensus
Parallelism TS: Progressing well; expected 2015
Concurrency TS: Executors and resumable functions need more work
Transactional Memory TS: Progressing well; expected 2015-ish
Concepts (“Lite”) TS: Progressing well; expected 2015
Reflection: A relatively full-featured compile-time introspection proposal was favourably reviewed. Might target a TS or C++17
Graphics: Moving forward with a cairo-based API, to be published in the form of a TS
Modules: Clang has a complete implementation for C++; the plan is to push it for C++17

Introduction

Last week I attended another meeting of the ISO C++ Standards Committee in Rapperswil, Switzerland (near Zurich). This is the third Committee meeting I have attended; you can find my reports about the previous two here (September 2013, Chicago) and here (February 2014, Issaquah). These reports, particularly the Issaquah one, provide useful context for this post.

With C++14's final ballot still in progress, the focus of this meeting was on the various language and library Technical Specifications (TS) that are planned as follow-ups to C++14, and on C++17.

C++14

C++14 is currently out for its “DIS” (Draft International Standard) ballot (see my Issaquah report for a description of the procedure for publishing a new language standard). This ballot was sent out at the end of the Issaquah meeting, and will close mid-August. If no national standards body poses an objection by then – an outcome considered very likely – then the standard will be published before the end of the year.

Since a ballot was in progress, no changes to the C++14 draft were made during the Rapperswil meeting.

C++17, and what’s up with all these TS’s?

ISO procedure allows the Committee to publish two types of documents:

  • International Standards (IS). These are official standards with stringent backwards-compatibility requirements.
  • Technical Specifications (TS) (formerly called Technical Reports (TR)). These are for things that are not quite ready to be enshrined into an official standard yet, and have no backwards-compatibility requirements. Specifications contained in a TS may or may not be added, possibly with modifications, into a future IS.

C++98 and C++11 are IS’s, as will be C++14 and C++17. The TS/TR form factor has, up until recently, only been used once by the Committee: for TR1, the 2005 library spec that gave us std::tr1::shared_ptr and other library enhancements that were then added into C++11.

Since C++11, in an attempt to make the standardization process more agile, the Committee has been aiming to place significant new language and library features into TS’s, published on a schedule independent of the IS’s. The idea is that being in a TS allows the feature to gain user and implementation experience, which the committee can then use to re-evaluate and possibly revise the feature before putting it into an IS.

As such, much of the standardization work taking place concurrently with and immediately after C++14 is in the form of TS’s, the first wave of which will be published over the next year or so, and the contents of which may then go into C++17 or a subsequent IS, as schedules permit.

Therefore, at this stage, only some fairly minor features have been added directly to C++17.

The most notable among them is the ability to write a range-based for loop of the form for (elem : range), i.e. with the type of the element omitted altogether. As explained in detail in my Issaquah report, this is a shorthand for for (auto&& elem : range) which is almost always what you want. The Evolution Working Group (EWG) approved this proposal in Issaquah; in Rapperswil it was also approved by the Core Working Group (CWG) and voted into C++17.
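A quick sketch of the shorthand and its specified expansion (my example, not one from the proposal paper):

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> prices{10, 20, 30};

        // The voted-in C++17 shorthand:
        //     for (elem : prices) { ... }
        // is specified to mean exactly:
        for (auto&& elem : prices) {
            std::cout << elem << '\n';
        }
    }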

Other minor things voted into C++17 include:

  • static_assert(condition), i.e. with the message omitted. An implementation-defined message is displayed.
  • auto var{expr}; is now valid and equivalent to T var{expr}; (where T is the deduced type)
  • A template template parameter can now be written as template <...> typename Name in addition to template <...> class Name, to mirror the way a type template parameter can be written as typename Name in addition to class Name
  • Trigraphs (an obscure feature that allowed certain characters, such as #, which are not present on some ancient keyboards, to be written as a three-character sequence, such as ??=) were removed from the language
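A quick sketch of the first three items (the examples are my own):

    #include <type_traits>

    // static_assert with the message omitted:
    static_assert(sizeof(int) >= 2);

    // auto with a single-element braced initializer now deduces the
    // element's type (previously std::initializer_list<int>):
    auto n{42};
    static_assert(std::is_same<decltype(n), int>::value, "n is an int");

    // 'typename' is now allowed where 'class' was required for a
    // template template parameter:
    template <template <typename> typename Container, typename T>
    struct Holder {
        Container<T> items;
    };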

Evolution Working Group (EWG)

As with previous meetings, I spent most of my time in the Evolution Working Group, which spends its time looking at proposals for new language features that either do not fall into the scope of any Study Group, or have already been approved by a Study Group. There was certainly no lack of proposals at this meeting; to EWG's credit, it got through all of them, at least the ones which had papers in the official pre-Rapperswil mailing.

Incoming proposals were categorized into three rough categories:

  • Approved. The proposal is approved without design changes. They are sent on to CWG, which revises them at the wording level, and then puts them in front of the committee at large to be voted into whatever IS or TS they are targeting.
  • Further Work. The proposal’s direction is promising, but it is either not fleshed out well enough, or there are specific concerns with one or more design points. The author is encouraged to come back with a modified proposal that is more fleshed out and/or addresses the stated concerns.
  • Rejected. The proposal is unlikely to be accepted even with design changes.

Accepted proposals:

  • Opening two nested namespaces with namespace A::B { in addition to namespace A { namespace B {
  • “Making return explicit”. This means that if a class A has an explicit constructor which takes a parameter of type B, then, in a function whose return type is A, it is valid to write return b; where b has type B. (Currently, one has to write return A(b);.) The idea is to avoid repetition; a very common use case is A being std::unique_ptr<T> for some T, and B being T*. This proposal was relatively controversial; it passed with a weak consensus in EWG, and was also discussed in the Library Evolution Working Group (LEWG), where there was no consensus for it. I was surprised that EWG passed this to CWG, given the state of the consensus; in fact, CWG indicated that they would like additional discussion of it in a forum that includes both EWG and LEWG members, before looking at it in CWG.
  • A preprocessor feature for testing for the presence of a (C++11-style) attribute: __has_cpp_attribute(attribute_name)
  • A couple that I already mentioned above, which were also passed by CWG and voted into C++17 at the same meeting.
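A sketch of the first and third items as they would be used (the names are made up):

    // Nested namespace definition, as proposed:
    namespace app::net {
        void connect();
    }
    // ...instead of:
    //   namespace app { namespace net { void connect(); } }

    // Preprocessor test for a C++11-style attribute:
    #if defined(__has_cpp_attribute)
    #  if __has_cpp_attribute(deprecated)
    #    define APP_DEPRECATED [[deprecated]]
    #  endif
    #endif
    #ifndef APP_DEPRECATED
    #  define APP_DEPRECATED
    #endif

    APP_DEPRECATED void old_connect();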

Proposals for which further work is encouraged:

  • A proposal to make C++ more friendly for embedded systems development by reducing the overhead of exception handling, and further expanding the expressiveness of constexpr. EWG encouraged the author to gather people interested in this topic and form a Study Group to explore it in more detail.
  • A proposal to convey information to the compiler about aliasing, via attributes. This is intended to be an improvement on C99's restrict.
  • A way to get the compiler to, on an opt-in basis, generate equality and comparison operators for a structure/class. Everyone wanted this feature, but there were disagreements about how terse the syntax should be, whether complementary operators should be generated automatically (e.g. != based on ==), how exactly the compiler should implement the operators (particularly for comparison – questions of total order vs. weaker orders came up), and how mutable members should be handled.
  • A proposal for a per-platform portable C++ ABI. I will talk about this in more detail below.
  • A way to force the compiler to omit padding between two structure fields
  • A way to specify that a class should be converted to another type in auto initialization. That is, for a class C, to specify that in auto var = c; (with c having type C), the type of var should actually be some other type D. The motivating use here is expression templates; in Matrix X, Y; auto Z = X * Y; we want the type of Z to be Matrix even if the type of X * Y is some expression template type. EWG liked the motivation, but the proposal tried to modify the semantics of template parameter deduction for by-value parameters so as to remain consistent with auto, and EWG was concerned that this was starting to encroach on too many areas of the language. The author was encouraged to come back with a more limited-scope proposal that concerned auto initialization only.
  • Fixed-size template parameter packs (typename...[K]), and packs where all parameters must be of the same type (T...[N]). EWG liked the idea, but had some concerns about syntactic ambiguities. The proposal also inspired an offshoot idea of subscripting parameter packs (e.g. Pack[0] gives you the first parameter), to avoid having to use recursion to iterate over the parameters in many cases.
  • Expanding parameter packs as expressions. Currently, if T is a parameter pack bound to parameters A, B, and C, then T... expands to A, B, C; this expansion is allowed in various contexts where a comma-separated list of things (types or expressions, as the parameters may be) is allowed. The proposal is to additionally allow expansions like T +..., which would expand to A + B + C and be allowed in an expression context (see the sketch after this list).
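
A minimal sketch of the difference, assuming a simple sum over a pack; the recursive version is valid C++ today, while the expansion-as-expression spelling is the proposal's and is shown only in a comment.

    // Today: combining a pack with an operator requires recursion.
    template <typename T>
    T sum(T t) { return t; }

    template <typename T, typename... Rest>
    T sum(T t, Rest... rest) { return t + sum(rest...); }

    int six = sum(1, 2, 3);   // 1 + (2 + (3))

    // As proposed (illustrative syntax, not valid C++):
    //
    //     template <typename... T>
    //     auto sum(T... t) { return t +...; }   // expands to a + b + c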

Rejected proposals:

  • Objects of runtime size. This would have allowed a pure library implementation of something like std::dynarray (and allowed users to write similar classes of their own), but it unfortunately failed to gain consensus. More about this in the Array Extensions TS section.
  • Additional flow control mechanisms like break label;, continue label; and goto case value;. EWG thought these encouraged hard-to-follow control flow.
  • Allowing specifiers such as virtual, static, override, and some others, to apply to a group of members the way access specifiers (private: etc.) currently do. The basis for rejection here was that separating these specifiers from the members they apply to can make class definitions less readable.
  • Specializing an entity in a different namespace without closing the namespaces you are currently in. Rejected because it’s not clear what would be in scope inside the specialization (names from the entity’s namespace, the current namespace, or both).
  • <<< and >>> operators for circular bit-shifts. EWG felt these would be more appropriate as library functions.
  • A rather complicated proposal for annotating template parameter packs that claimed to be a generalization of the proposal for fixed-size template parameter packs. Rejected because it would have made the language much more complicated, while the benefit would mostly have been for template metaprogramming; also, several of the use cases can be satisfied with Concepts instead.
  • Throwing an exception on stack exhaustion. The implementers in the room felt this was not implementable.

I should also mention the proposal for named arguments that Ehsan and I have been working on. We did not prepare this proposal in time to get it into the pre-Rapperswil mailing, and as such, EWG did not look at it in Rapperswil. However, I did have some informal discussions with people about it. The main concerns were:

  • consideration for constructor calls with {...} syntax and, by extension, aggregate initialization
  • the relationship to C99 designated initializers (if we are covering aggregate initialization, then these can be viewed as competing syntaxes; see the sketch after this list)
  • most significantly: parameter names becoming part of a library’s interface that library authors then have to be careful not to break
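
For the second bullet, the competing C99 syntax looks like this; the named-argument call shown in the comment is purely hypothetical, since our proposal's syntax was still in flux.

    struct Point { int x; int y; };

    // C99 designated initializers (a C feature, not part of C++ at the time):
    //
    //     struct Point p = { .x = 1, .y = 2 };
    //
    // A named-argument spelling of aggregate initialization, for example
    // (hypothetical syntax):
    //
    //     Point p = make_point(x: 1, y: 2);
    //
    // would cover the same ground, hence the "competing syntaxes" concern.
    Point p = {1, 2};   // the portable C++ spelling at the time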

Assuming we are able to address these concerns, we will likely write an updated proposal, get it into the pre-Urbana mailing (Urbana-Champaign, Illinois is the location of the next Committee meeting in November), and present it at the Urbana meeting.

Portable C++ ABI

One of the most exciting proposals at this meeting, in my opinion, was Herb Sutter’s proposal for a per-platform portable C++ ABI.

A per-platform portable ABI means that, on a given platform (where “platform” is generally understood to mean the combination of an operating system, processor family, and bitness), binary components can be linked together even if they were compiled with different compilers, or different versions of the same compiler, or different compiler options. The current lack of this in C++ is, I think, one of C++’s greatest weaknesses compared to other languages like C or Java.

More specifically, there are two aspects to ABI portability: language and library. On the language side, portability means that binary components can be linked together as long as, for any interface between two components (for example, for a function that one component defines and the other calls, the interface would consist of the function’s declaration, and the definitions of any types used as parameters or return type), the two components are compiled from identical standard-conforming source code for that interface. On the library side, portability additionally means that interfaces between components can make use of standard library types (this does not follow solely from the language part, because different compilers may not have identical source code for their standard library types).

It has long been established that it is out of scope for the C++ Standard to prescribe an ABI that vendors should use (among other reasons, because parts of an ABI are inherently platform-specific, and the standard cannot enumerate every platform and prescribe something for each one). Instead, Herb’s proposal is that the standard codify the notions of a platform and a platform owner (an organization/entity who controls the platform); require that platform owners document an ABI (in the case of the standard library, this means making available the source code of a standard library implementation) which is then considered the platform ABI; and require compiler and standard library vendors to support the platform ABI to be conformant on any given platform.

In order to ease transitioning from the current world where, on a given platform, the ABI can be highly dependent on the compiler, the compiler version, or even compiler options, Herb also proposes some mechanisms for delineating a portion of one’s code which should be ABI-portable, and therefore compiled using the platform ABI. These mechanisms are a new linkage (extern "abi") on the language side, and a new namespace (std::abi, containing the same members as std) on the library side. The idea is that one can restrict the use of these mechanisms to code that constitutes component interfaces, thereby achieving ABI portability without affecting other code.
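
A sketch of how the two mechanisms might appear in an interface header, based on the proposal's description; both the extern "abi" linkage and the std::abi namespace are proposal syntax, so everything here is illustrative only.

    // Illustrative only (proposal syntax, not standard C++):
    //
    //     extern "abi" {
    //         // Compiled with the platform ABI, so callable from a binary
    //         // built by a different compiler or compiler version:
    //         std::abi::string greet(const std::abi::string& name);
    //     }
    //
    // Code outside such interface regions keeps using std::string and the
    // compiler's existing ABI, so only boundary code pays the transition cost.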

This proposal was generally well-received, and certainly people agreed that a portable ABI is something C++ needs badly, but some people had concerns about the specific approach. In particular, implementers were uncomfortable with the idea of potentially having to support two different ABIs side-by-side in the same program (the platform ABI for extern "abi" entities, and the existing ABI for other entities), and, especially, with having two copies of every library entity (one in std and one in std::abi). Other concerns about std::abi were raised as well, such as the performance penalty arising from having to convert between std and std::abi types in some places, and the duplication being difficult to teach. It seemed that a modified proposal that concerned the language only and dropped std::abi would have greater consensus.

Array Extensions TS

The Array Extensions TS was initially formed at the Chicago meeting (September 2013), when the committee decided to pull arrays of runtime bound (ARBs, the C++ version of C’s VLAs) and dynarray, the standard library class for encapsulating them, out of C++14 and into a TS. This was done mostly because people were concerned that dynarray required too much compiler magic to implement. People expressed a desire for a language feature that would allow them to implement a class like dynarray themselves, without any compiler magic.
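
For orientation, a sketch of the two pieces as they stood in the draft; neither shipped in a published standard, so this is draft-era syntax only.

    // Draft-era sketch (neither feature made it into a published standard):
    //
    //     void f(std::size_t n) {
    //         int arb[n];               // array of runtime bound: size fixed
    //                                   // on entry, storage typically stack
    //
    //         std::dynarray<int> d(n);  // proposed wrapper: adds begin(),
    //                                   // end(), size(), checked at(), ...
    //     }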

In Issaquah, a couple of proposals for such a language feature were presented, but they were relatively early-stage proposals with various issues, such as quirky syntax and insufficient generality. Nonetheless, there was consensus that a library component is necessary, and that we’d rather not have ARBs at all than get them without a library component to wrap them in a C++ interface.

At this meeting, a relatively fully fleshed-out proposal was presented that gave programmers a fairly general/flexible way to define classes of runtime size. Unfortunately, it was considered a complex change that touches many parts of the language, and there was no consensus for going forward with it.

As a result, the Array Extensions TS is completely stalled: ARBs themselves are ready, but we don’t want them without a library wrapper, and no proposal for a library wrapper (or for a mechanism that would enable one to be written) has consensus. This means that the status quo of not being able to use VLAs in C++ (unless a vendor enables C-style VLAs in C++ as an extension) will remain for now.

Library / Library Evolution Working Groups (LWG and LEWG)

Library work at this meeting included the Library Fundamentals TS (and its planned follow-up, Library Fundamentals II), the Filesystems TS and Networking TS (about which I’ll talk in the SG 3 and SG 4 sections below), and reviewing library components of other projects like the Concurrency TS.

The Library Fundamentals TS was in the wording review stage at this meeting, with no new proposals being added to it. It contains general library utilities such as optional, any, string_view, and more; see my Issaquah report for a full list. The current draft of the TS can be found here. At the end of the meeting, the Committee voted to send out the TS for its first national body ballot, the PDTS (Preliminary Draft TS) ballot. This ballot concludes before the Urbana meeting in November; if the comments can be addressed during that meeting and the TS sent out for its second and final ballot, the DTS (Draft TS) ballot, it could be published in early 2015.

The Committee is also planning a follow-up to the Library Fundamentals TS, called the Library Fundamentals TS II, which will contain general utilities that did not make it into the first one. Currently, it contains one proposal, a generalized callable negator; another proposal, containing library facilities for contract programming, was rejected for several reasons, one of them being that it is expected to be obsoleted in large part by reflection. Several other proposals are under consideration to be added.

Study Groups

SG 1 (Concurrency)

SG 1 focuses on two areas, concurrency and parallelism, and there is one TS in the works for each.

I don’t know much about the Parallelism TS, other than that it’s in good shape and was sent out for its PDTS ballot at the end of the meeting, which could lead to publication in 2015.

The status of the Concurrency TS is less certain. Coming into the Rapperswil meeting, the Concurrency TS contained two things: improvements to std::future (notably a then() method for chaining a second operation to it), and executors and schedulers, with resumable functions slated for addition.

However, the pre-Rapperswil mailing contained several papers arguing against the existing designs for executors and resumable functions, and proposing alternative designs instead. These papers led to executors and schedulers being removed from the Concurrency TS, and resumable functions not being added, until people come to a consensus regarding the alternative designs. I’m not sure whether publication of the Concurrency TS (which now contains only the std::future improvements) will proceed, leaving executors and resumable functions for a follow-up TS, or be stalled until consensus on the latter topics is reached.

For resumable functions, I was in the room during the technical discussion, and found it quite interesting. The alternative proposal is a coroutines library based on Boost.Coroutine. The two proposals differ both in syntax (new keywords async and await vs. the magic being hidden entirely behind a library interface), and implementation technique for storing the local variables of a resumable function (heap-allocated “activation frames” vs. side stacks). The feedback from SG 1 was to disentangle these two aspects, possibly yielding a proposal where either syntax could be matched with either implementation technique.
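
A sketch of the contrast, assuming a trivial continuation; the blocking version is valid C++ today, while the keyword style is shown only as a comment, since its exact spelling varied across proposal revisions.

    #include <future>

    // Status quo: chaining work onto a future means blocking a thread.
    int twice(std::future<int> f) {
        return f.get() * 2;   // blocks until the value is ready
    }

    // Keyword style, roughly as proposed (illustrative, not valid C++):
    //
    //     std::future<int> twice(std::future<int> f) async {
    //         return (await f) * 2;   // suspends instead of blocking
    //     }
    //
    // The library style would instead hide the suspension behind a coroutine
    // type modeled on Boost.Coroutine, with no new keywords.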

There are also other concurrency-related proposals before SG 1, such as ostream buffers, latches and barriers, shared_mutex, atomic operations on non-atomic data, and a synchronized wrapper. I assume these will go into either the current Concurrency TS or a follow-up TS, depending on how they progress through the committee.

SG 2 (Modules)

Modules are, in my opinion, one of the most sorely needed features in C++. They have the potential to cut compile times by an order of magnitude or more, thus bringing compile-time performance more in line with more modern languages, and to solve the combinatorial explosion problem, caused by macros, that hampers the development of powerful tooling such as automated refactoring.

The standardization of modules has been a slow process for two reasons. First, it’s an objectively difficult problem to solve. Second, the solution shares some of the implementation difficulties of the export keyword, a poorly thought-out feature in C++98 that sought to allow separate compilation of templates; export was only ever implemented by one compiler (EDG), and the implementation process revealed flaws that led not only to other compilers not bothering to implement it, but also to the feature being removed from the language in C++11. This bad experience with export led people to be uncertain about whether modules are even implementable. As a result, while some papers have been written proposing designs for modules (notably, one by Daveed Vandevoorde a couple of years back, and one by Gabriel Dos Reis very recently), what everyone has really been holding their breath for was an implementation (of any variation/design), to see that one was possible.

Google and others have been working on such an implementation in Clang, and I was very excited to hear Chandler Carruth (head of Google’s team working on Clang) report that they have now completed it! As this work was completed only very recently, prior to the meeting, they did not get a chance to write a paper to present at this meeting, but Chandler said one will be forthcoming for the next meeting.

EWG held a session on Modules, where Gaby presented his paper, and the Clang folks discussed their implementation. There were definitely some differences between the two. Gaby’s proposal came across as more idealistic: this is what a module system should look like if you’re writing new code from scratch with modules. Clang’s implementation is more practical: this is how we can start using modules right away in existing codebases. For example, in Gaby’s proposal, a macro defined in a module is not visible to an importing module; in Clang’s implementation, it is, reflecting the reality that today’s codebases still use macros heavily. As another example, in Gaby’s proposal, a module writer must say which declarations in the module file are visible to importing modules by surrounding them in an export block (not to be confused with the failed language feature I talked about above); in Clang’s implementation, a header file can be used as a module without any changes, using an external “module map” file to tell the compiler it is a module. Another interesting design question that came up was whether private members of a class exported from a module are “visible” to an importing module (in the sense that importing modules need to be recompiled if such a private member is added or modified); in Clang’s implementation, this is the case, but there would certainly be value in avoiding this (among other things, it would obsolete the laborious “Pimpl” design pattern).
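
A rough sketch of the two flavours, based only on the descriptions above (the exact syntax differed between the papers and was one of the open design points):

    // Along the lines of Gaby's proposal (illustrative syntax):
    //
    //     module math.core;
    //     export {
    //         int gcd(int a, int b);   // visible to importing modules
    //     }
    //     int helper();                // not exported; invisible to importers
    //
    // Consuming side:
    //
    //     import math.core;
    //
    // Clang's approach: leave the header untouched and supply an external
    // "module map" file telling the compiler to treat it as a module; macros
    // defined in the header remain visible to importers.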

The takeaway was that, while everyone wants this feature, and everyone is excited about there finally being an implementation, several design points still need to be decided. EWG deemed that it was too early to take any polls on this topic, but instead encouraged the two parties (the Clang folks, and Gaby, who works for Microsoft and hinted at a possible Microsoft implementation effort as well) to collaborate on future work. Specifically, EWG encouraged that the following papers be written for Urbana: one about what is common to the various proposals, and one or more about the remaining deep technical issues. I eagerly await such future work and papers.

SG 3 (Filesystems)

At this meeting, the Library Working Group finished addressing the ballot comments for the Filesystems TS’s PDTS ballot, and sent the TS out for the final “DTS” ballot. If this ballot is successful, the Filesystems TS will be published by the end of 2014.

Beman (the SG 3 chair) stated that SG 3 will entertain new filesystem-related proposals that build upon the Filesystems TS, targeting a follow-up Filesystems TS II. To my knowledge, no such proposals have been submitted so far.

SG 4 (Networking)

SG 4 had been working on standardizing basic building blocks related to networking, such as IP addresses and URIs. However, these efforts are stalled.

As a result, the LEWG decided at this meeting to take it upon itself to advance networking-related proposals, and set its sights on something much more ambitious than IP addresses and URIs: a full-blown sockets library, based on Boost.ASIO. The plan is basically to pick up the 2007 paper by Chris Kohlhoff (the author of ASIO) proposing ASIO for standardization, incorporating changes that update the library for C++11, with C++14 updates forthcoming. This idea received very widespread support in LEWG; the group decided to give people another meeting to digest the new direction, and will then propose adopting these papers as the working paper for the Networking TS in Urbana.

This change in pace and direction might seem radical, but it’s in line with the committee’s philosophy for moving more rapidly with TS’s. Adopting the ASIO spec as the initial Networking TS working paper does not mean that the committee agrees with every design decision in ASIO; on the contrary, people are likely to propose numerous changes to it before it gets standardized. However, having a working paper will give people something to propose changes against, and thus facilitate progress.

SG 5 (Transactional Memory)

The Transactional Memory TS is progressing well through the committee. CWG began reviewing its wording at this meeting, and referred one design issue to EWG. (The issue concerned functions that were declared to be neither transaction-safe nor transaction-unsafe, and were defined out of line, so the compiler cannot deduce their transaction safety from the definition. The state of the proposal coming into the discussion was that for such functions, the compiler must assume that they can be either transaction-safe or transaction-unsafe; this resulted in the compiler sometimes needing to generate two versions of some functions, with the linker stripping out the unused version if you’re lucky. EWG preferred avoiding this, and instead assuming that such functions are transaction-unsafe.) CWG will continue reviewing the wording in Urbana, and will hopefully send out the TS for its PDTS ballot then.
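
For context, a sketch of the constructs involved; treat the exact keyword spellings as assumptions on my part, since the TS wording is still under review.

    // Sketch of the Transactional Memory TS style (keyword spellings are
    // my assumption; illustrative only):
    //
    //     int counter = 0;
    //
    //     void bump() transaction_safe;   // declared safe; when defined out
    //                                     // of line, the compiler cannot
    //                                     // infer safety (the EWG issue)
    //
    //     void caller() {
    //         synchronized {              // runs as one transaction, as if
    //             ++counter;              // under a single global lock
    //         }
    //     }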

SG 6 (Numerics)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 7 (Reflection)

SG 7 met for an evening session and looked at three papers:

  • The latest version of the source code information capture proposal, which aims to replace the __LINE__, __FILE__, and __FUNCTION__ macros with a first-class language feature (a sketch follows this list). There was a lot of enthusiasm for this idea in Issaquah, and now that it’s been written up as a paper, SG 7 is moving on it quickly, deciding to send it right on to LEWG with only minor changes. The publication vehicle – with possible choices being Library Fundamentals TS II, a hypothetical Reflection TS, or C++17 – will be decided by LEWG.
  • The type member property queries proposal by Andrew Tomazos. This is an evolution of an earlier proposal which concerned enumerations only, and which was favourably reviewed in Issaquah; the updated proposal extends the approach taken for enumerations, to all types. The result is already a quite featureful compile-time introspection facility, on top of which facilities such as serialization can be built. It does have one significant limitation: it relies on forming pointers to members, and thus cannot be used to introspect members to which pointers cannot be formed – namely, references and bitfields. The author acknowledged this, and pointed out that supporting such members with the current approach would require language changes. SG 7 did not deem this a deal-breaker problem, possibly out of optimism that such language changes would be forthcoming if this facility created a demand for them. Overall, the general direction of the proposal had basically unanimous support, and the author was encouraged to come back with a revised proposal that splits out the included compile-time string facility (used to represent names of members for introspection) into a separate, non-reflection-specific proposal, possibly targeted at Library Fundamentals II. The question of extending this facility to introspect things other than types (notably, namespaces, although there was some opposition to being able to introspect namespaces) also came up; the consensus here was that such extensions can be proposed separately when desired.
  • A more comprehensive static reflection proposal was looked at very, very briefly (the author was not present to speak about it in detail). This was a higher-level and more featureful proposal than Tomazos’ one; the prevailing opinion was that it is best to standardize something lower-level like Tomazos’ proposal first, and then consider standardizing higher-level libraries that build on it if appropriate.
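
As a sketch of what the first item enables (all names here are assumptions, not the proposal's actual API), the classic use case is a logging function that records its caller's location:

    #include <cstdio>

    // Hypothetical shape of a first-class source-location type replacing
    // the __LINE__ / __FILE__ / __FUNCTION__ macros (names are assumptions):
    struct source_location {
        const char* file;
        const char* function;
        unsigned    line;
    };

    // The macro-based status quo: these defaults expand where the function
    // is declared, not where it is called, which is exactly the limitation
    // a first-class feature would fix by capturing the caller's location.
    void log(const char* msg,
             const char* file = __FILE__,
             unsigned line = __LINE__) {
        std::printf("%s:%u: %s\n", file, line, msg);
    }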

SG 8 (Concepts)

The Concepts TS (formerly called “Concepts Lite”, but then people thought “Lite” was too informal to be in the title of a published standard) is still in the CWG review stage. Even though CWG looked at it in Issaquah, and the author and project editor, Andrew Sutton, revised the draft TS significantly for Rapperswil, the feature touches many areas of the language, and as such more review of the wording was required; in fact, CWG spent almost three full days looking at it this time.

The purpose of a CWG review of a new language feature is twofold: first, to make sure the feature meshes well with all areas of the language, including interactions that the author and EWG may have glossed over; and second, to make sure that the wording reflects the author’s intention accurately. In fulfilling the first objective, CWG often ends up making minor changes to a feature, while staying away from making fundamental changes to the design (sometimes, recommendations for more significant changes do come up during a CWG review – these are run by EWG before being acted on).

In the case of the Concepts TS, CWG made numerous minor changes over the course of the review. It was initially hoped that there would be time to revise the wording to reflect these changes, and to put the revised wording out for a PDTS ballot by the end of the meeting, but the changes were too numerous to make this feasible. Therefore, the PDTS ballot proposal was postponed until Urbana, and Andrew has until then to implement the wording changes.

SG 9 (Ranges)

SG 9 did not meet in Rapperswil, but does plan to meet in Urbana, and I anticipate some exciting developments in Urbana.

First, I learned that Eric Niebler, who in Issaquah talked about an idea for a Ranges proposal that I thought was very exciting (I describe it in my Issaquah report), plans to write up his idea as a proposal and present it in Urbana.

Second, one of the attendees at Rapperswil, Fabio Fracassi, told me that he is also working on a (different) Ranges proposal that he plans to present in Urbana as well. I’m not familiar with his proposal, but I look forward to it. Competition is always healthy when it comes to early-stage standards proposals, where we are still choosing an approach to solving a problem.

SG 10 (Feature Test)

I didn’t follow the work of SG 10 very closely. I assume that, in addition to the __has_cpp_attribute() preprocessor feature that I mentioned above in the EWG section, they are kept busy by the flow of new features being added into working papers, for each of which they have to decide whether the feature deserves a feature-test macro, and if so standardize a name for one.
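
For illustration, the usage pattern looks something like this; __cpp_constexpr is one of the macro names SG 10 has recommended, and the #ifndef guard is the usual way to stay portable to compilers that lack __has_cpp_attribute:

    // Feature detection via SG 10's feature-test macros:
    #ifndef __has_cpp_attribute            // not all compilers provide this
        #define __has_cpp_attribute(x) 0
    #endif

    #if __has_cpp_attribute(noreturn)
        #define MY_NORETURN [[noreturn]]   // MY_NORETURN is our own helper
    #else
        #define MY_NORETURN
    #endif

    MY_NORETURN void fail();

    // Language features get __cpp_* macros; 201304 marks C++14's relaxed
    // constexpr rules:
    #if defined(__cpp_constexpr) && __cpp_constexpr >= 201304
        #define HAVE_RELAXED_CONSTEXPR 1
    #endif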

Clark (the SG 10 chair) did mention that the existence of TS’s complicates matters for SG 10, but I suppose that’s a small price to pay for the benefits of TS’s.

SG 12 (Undefined Behaviour)

Did not meet in Rapperswil, but plans to meet in Urbana.

SG 13 (Human Interaction, formerly “Graphics”)

SG 13 met for a quarter-day session, during which Herb presented an updated version of the proposal for a cairo-based 2D drawing API. A few interesting points came up in the discussion:

  • The interface being standardized is a C++ wrapper interface around cairo that was generated using a set of mechanical transformation rules applied to cairo’s interface. The transformation rules are not being standardized, only their result (so the standard interface can potentially diverge from cairo in the future, though presumably this wouldn’t be done without a very good reason).
  • I noted that Mozilla is moving away from cairo, in part due to inefficiencies caused by cairo being a stateful API (as explained here). It was pointed out that this inefficiency is an issue of implementation (due to cairo using a stateless layer internally), not one of interface. This is a good point, although I’m not sure how much it matters in practice, as standard library vendors are much more likely to ship cairo’s implementation than write their own. (Jonathan Wakely said so about libstdc++, but I think it’s very likely the case for other vendors as well.)
  • Regarding the tradeoff between a nice interface and high performance, Herb said the goal was to provide a nice interface while providing as good performance as we can get, without necessarily squeezing out every last ounce of performance.
  • The library has the necessary extension points in place to allow for uses such as hooking into drawing onto a drawing surface of an existing library, such as a Qt canvas (with the cooperation of the existing library, cairo, and the platform, of course).

The proposal is moving forward: the authors are encouraged to come back with wording.

TS Content Guidelines

One mildly controversial issue that came to a vote in the plenary meeting at the end of the week is the treatment of modifications/extensions to standard library types in a Technical Specification. One group held that the simplest thing to do for users is to have the TS specify modifications to the types in std:: themselves. Another group was of the view that, in order to make life easier for a third-party library vendor implementing a TS, as well as to ensure that it remains practical for a subsequent IS to break with the TS if it needs to, the types being modified should be cloned into an std::experimental:: namespace, and the modifications applied there. This second view prevailed.

Next Meeting

The next Committee meeting (“Urbana”) will be at the University of Illinois at Urbana-Champaign, the week of November 3rd.

Conclusion

The highlights of the meeting for me, personally, were:

  • The revelation that Clang has completed their modules implementation, that they will be pushing it for C++17, and that they are fairly confident that they will be able to get it in. The adoption of a proper modules system has the potential to revolutionize compilation speeds and the tooling landscape – revolutions that C++ needs badly.
  • Herb’s proposal for a portable C++ ABI. It is very encouraging to see the committee, which has long held this issue to be out of its scope, looking at a concrete proposal for solving a problem which, in my opinion, plays a significant role in hampering the use of C++ interfaces in libraries.
  • LEWG looking at bringing the entire Boost.ASIO proposal into the Networking TS. This dramatically brings forward the expected timeframe of having a standard sockets library, compared to the previous approach of standardizing first URIs and IP addresses, and then who knows what before finally getting to sockets.

I eagerly await further developments on these fronts and others, and continue to be very excited about the progress of C++ standardization.


[Source: Planet Mozilla]

Adam Lofting: 2014 Contributor Goals: Half-time check-in

We’re a little over halfway through the year, and our dashboard is now good enough to tell us how we’re doing.

TL;DR:

  • The existing trend lines won’t get us to our 2014 goals
    • but knowing this is helpful
    • and getting there is possible
  • Ask less: How do we count our contributors?
  • Ask more: What are we doing to grow the contributor community? And, are we on track?

Changing the question

Our dashboard now needs to move from being a project to being a tool that helps us do better. After all, Mozilla’s unique strength is that we’re a community of contributors and this dashboard, and the 2014 contributor goal, exist to help us focus our workflows, decisions and investments in ways that empower the community. Not just for the fun of counting things.

The first half of the year focused us on the question “How do we count contributors?”. By and large, this has now been answered.

We need to switch our focus to:

  1. Are we on track?
  2. What are we doing to grow the contributor community?

Then we repeat these two questions regularly throughout the year, adjusting our strategy as we go.

Are we on track?

Wearing my cold-dispassionate-metrics hat, and not my “I know how hard you’re all working already” hat, I have to say no (or, not yet).

I’m going to look at this team by team and then look at the All Mozilla Foundation view at the end.

Your task, for each graph below, is to take an imaginary marker pen and draw the line for the rest of the year, based only on the data you can see to date.

  • What does your trend line look like?
  • Is it going to cross the dotted target line in 2014?

OpenNews

[Graph: OpenNews active contributors to date vs. 2014 target]

Based on the data to-date, I’d draw a flat line here. Although there are new contributors joining pretty regularly, the overall trend is flat. In marketing terms there is ‘churn’; not a nice term, but a useful one to talk about the data. To use other crass marketing terms, ‘retention’ is as important as ‘acquisition’ in changing the shape of this graph.

Science Lab

[Graph: Science Lab active contributors to date vs. 2014 target]

Dispassionately here, I’d have to draw a trend line that’s pointing slightly down. One thing to note in this view is that the Science Lab team have good historic data, so what we’re seeing here is the result of the size of the community in early 2013, and some drop-off from those people.

Appmaker

[Graph: Appmaker active contributors to date vs. 2014 target]

This graph is closest to what we want to see generally, i.e. pointing up. But I’ll caveat that with a couple of points. First, taking the imaginary marker pen, this isn’t going to cross the 2014 target line at the current rate. Second, unlike the Science Lab and OpenNews data above, much of this Appmaker counting is new. And when you count things for the first time, a 12-month rolling active total has a cumulative effect in the first year, which increases the appearance of growth but might not be a long-term trend. This is because Appmaker community churn won’t be visible until next year, when people can first drop out of the twelve-month active window.

Webmaker

[Graph: Webmaker active contributors to date vs. 2014 target]

This graph is the hardest to extend with our imaginary marker pen, especially with the positive incline we can see as Maker Party kicks off. The Webmaker plan expects much of the contributor community growth to come from the Maker Party campaign, so a steady incline was not the expectation across the year. But, we can still play with the imaginary marker pen.

I’d do the following exercise: in the first six months, active contributors grew by ~800 (~130 per month). Assuming that’s a general trend (a big assumption) and working back from 10k in December, you would need to be at ~9,500 by the end of September. Mark a point at 9,500 contributors above the October tick and look at the angle of growth required throughout Maker Party to get there. That’s not impossible, but it’s a big challenge, and I don’t have any historic data to make an informed call here.

Note: the Appmaker/Webmaker separation here is a legacy thing from the beginning of the year when we started this project. The de-duped datastore we’re working on next will allow us to graph: Webmaker Total > Webmaker Tools > Appmaker as separate graphs with separate goals, but which get de-duped and roll-up into the total numbers above, and in turn roll-up into the Mozilla wide total at areweamillionyet.org – this will better reflect the actual overlaps.

Metrics

[ 0 contributors ]

The MoFo metrics team currently has zero active volunteer contributors, and based on the data available to date is trending absolutely flat. Action is required here, or this isn’t going to change. I also need to set a target. Growing 0 by 10X doesn’t really work. So I’ll aim for 10 volunteer contributors in 2014.

All Mozilla Foundation

[Graph: All Mozilla Foundation active contributors to date vs. 2014 target]

Here we’re adding up the other graphs and also adding in ~900 people who contributed to MozFest in October 2013. That MozFest number isn’t counted towards a particular team and simply lifts the total for the year. There is no trend for the MozFest data because all the activity happened at once, but if there wasn’t a MozFest this year (don’t worry, there is!) in October the total line would drop by 900 in a single week. Beyond that, the shape of this line is the cumulative result of the team graphs above.

In Q3, we’ll be able to de-dupe this combined number, as there are certainly contributors working across MoFo teams. In a good way, our total will be less than the sum of our parts.

Where do we go from here?

First, don’t panic. Influencing these trend lines is not like trying to shift a nation’s voting trends in the next election. Much of this is directly under our control, or if not ‘control’, then it’s something we can strongly influence. So long as we work on it.

Next, it’s important to note that this is the first time we’ve been able to see these trends, and the first time we can measure the impact of decisions we make around community building. Growing a community beyond a certain scale is not a passive thing. I’ve found David Boswell’s use of the term ‘intentional’ community building really helpful here. And much more tasteful than my marketing vocabulary!

These graphs show where we’re heading based on what we’re currently doing, and until now we didn’t know if we were doing well, or even improving at all. We didn’t have any feedback mechanism on decisions we’d make relating to community growth. Now we do.

Trend setting

Here are some initial steps that can help with the ‘measuring’ part of this community building task.

Going back to the marker pen exercise, take another imaginary color and, rather than extrapolating the current trend, draw a positive line that gets you to your target by the end of the year. This doesn’t have to be a straight line; allow your planned activity to shape the growth you want to see. Then ask:

  • Where do you need to be in Aug, Sep, Oct, Nov, Dec?
  • How are you going to reach each of these smaller steps?

Schedule a regular check-in that focuses on growing your contributor community and check your dashboard:

  • Are your current actions getting you to your goals?
  • What are the next actions you’re going to take?

The first rule of fundraising is ‘Ask for money’. People often overlook this. By the same measure, are you asking for contributions?

  • How many people are you asking this week or month to get involved?
  • What percentage of them do you expect to say yes and do something?

Multiply those numbers together and see if that prediction can get you to your next step towards your goal. (For example, asking 200 people with a 5% yes-rate predicts about 10 new contributors.)

Asking these questions alone won’t get us to our goals, but it helps us to know if our current approach has the capacity to get there. If it doesn’t we need to adjust the approach.

Those are just the numbers

I could probably wrap up this check-in from a metrics point of view here, but this is not a numbers game. The Total Active Contributor number is a tool to help us understand scale beyond the face-to-face relationships we can store in our personal memories.

We’re lucky at Mozilla that so many people already care about the mission and want to get involved, but sitting and waiting for contributors to show up is not going to get us to our goals in 2014. Community building is an intentional act.

Here’s to setting new trends.

[Source: Planet Mozilla]
