Mozilla DevCenter

What Is Firefox
Brian King provides a brief look at Firefox's origins and evolution, and then dives into its support for web standards like CSS and XML, its debugging and extension capabilities, and some cool new features in the upcoming 1.5 release. If you're considering a switch to Firefox, this article may help make the decision for you.


Mozilla as a Development Platform: An Interview with Axel Hecht  Axel Hecht is a member of Mozilla Europe's board of directors, and a major contributor to the Mozilla project. At O'Reilly's European Open Source Convention (October 17-20), Dr. Hecht will be talking about Mozilla as a development platform. O'Reilly Network interviewed Dr. Hecht to find out if the long-held dream of Mozilla as a development platform was about to come true.   [O'Reilly Network]

A Firefox Glossary  Brian King, with some help from Nigel McFarlane, covers everything from about:config to "zool" in this fun, fact-filled Firefox glossary. It's by no means exhaustive, but you'll find references to specific chapters or hacks throughout the glossary to Nigel's book, Firefox Hacks. When you're ready to dig deeper, check out his book.   [O'Reilly Network]

Important Notice for Mozilla DevCenter Readers About O'Reilly RSS and Atom Feeds  O'Reilly Media, Inc. is rolling out a new syndication mechanism that provides greater control over the content we publish online. Here's information to help you update your existing RSS and Atom feeds to O'Reilly content.  [Mozilla DevCenter]

Hacking Firefox  This excerpt from Firefox Hacks shows you how to use overlays (essentially hunks of UI data) to make something you want to appear in the Firefox default application, perhaps to carry out a particular function of your extension. For example, you might want to add a menu item to the Tools menu to launch your extension. Overlays allow existing Firefox GUIs to be enhanced.   [O'Reilly Network]

Mozile: What You See is What You Edit  Most modern browsers don't allow you to hit "edit" and manipulate content as easily as you view it, WYSIWYG-style. Mozile, which stands for Mozilla Inline Editor, is a new Mozilla plug-in for in-browser editing. This article by Conor Dowling provides an overview of Mozile and what in-browser editing means.   [Mozilla DevCenter]

The Future of Mozilla Application Development  Recently, mozilla.org announced a major update to its development roadmap. Some of the changes in the new document represent a fundamental shift in the direction and goals of the Mozilla community. In this article, David Boswell and Brian King analyze the new roadmap, and demonstrate how to convert an existing XPFE-based application into an application that uses the new XUL toolkit. David and Brian are the authors of O'Reilly's Creating Applications with Mozilla.   [Mozilla DevCenter]

Remote Application Development with Mozilla, Part 2  In their first article, Brian King, coauthor of Creating Applications with Mozilla, and Myk Melez looked at the benefits of remote application development using Mozilla technologies such as XUL and web services support. In this article, they present a case study of one such application, the Mozilla Amazon Browser, a tool for searching Amazon's catalogs.   [Mozilla DevCenter]

Remote Application Development with Mozilla  This article explores the uses for remote XUL (loaded from a Web server), contrasts its capabilities with those of local XUL (installed on a user's computer), explains how to deploy remote XUL, and gives examples of existing applications.   [Mozilla DevCenter]

Mozdev.org Made Easy  Now that mozilla.org is about to release Mozilla 1.2 and Netscape has come out with the latest version of their own Mozilla-based browser, Netscape 7, this is a great time to see what other people are building with Mozilla's cross-platform development framework. Here's a little history about, and a roadmap to, mozdev.org.   [Mozilla DevCenter]

XML Transformations with CSS and DOM  Mozilla permits XML to be rendered in the browser with CSS and manipulated with DOM. If you're already familiar with CSS and DOM, you're more than halfway to achieving XML transformations in Mozilla. This article demonstrates how to render XML in the browser with a minimum of CSS and JavaScript.   [Mozilla DevCenter]

Roll Your Own Browser  Here's a look at using the Mozilla toolkit to customize, or even create your own browser.   [Mozilla DevCenter]

Let One Hundred Browsers Bloom  In this article, David Boswell, coauthor of Creating Applications with Mozilla, surveys some of the more interesting, and useful, Mozilla-based browsers available now.   [Mozilla DevCenter]

Using the Mozilla SOAP API  With the release of Mozilla 1.0, the world now has a browser that supports SOAP natively. This article shows you how Web applications running in Mozilla can now make SOAP calls directly from the client without requiring a browser refresh or additional calls to the server.   [Web Development DevCenter]





Today's News
September 30, 2014

Dustin J. Mitchell: fwunit: Unit Tests for your Network

I find your lack of unit tests ... disturbing

It's established fact by now that code should be tested. The benefits are many:

  • Exercising the code;
  • Reducing ambiguity by restating the desired behavior (in the implementation, in the tests, and maybe even a third time in the documentation); and
  • Verifying that the desired behavior remains unchanged when the code is refactored.

System administrators are increasingly thinking of infrastructure as code and reaping the benefits of testing, review, version control, collaboration, and so on. In the networking world, this typically implies "software defined networking" (SDN), a substantial change from the typical approach to network system configuration.

At Mozilla, we haven't taken the SDN plunge yet, although there are plans in the works. In the interim, we maintain very complex firewall configurations by hand. Understanding how all of the rules fit together and making manual changes is often difficult and error-prone. Furthermore, after years of piece-by-piece modifications to our flows, the only comprehensive summary of our network flows is the firewall configurations themselves. And those are not very readable for anyone not familiar with firewalls!

The difficulty and errors come from the gap between the request for a flow and the final implementation, perhaps made across several firewalls. If everyone -- requester and requestee -- had access to a single, readable document specifying what the flows should look like, then requests for modification could be more explicit and easier to translate into configuration. If we have a way to verify automatically that the firewall configurations match the document, then we can catch errors early, too.

I set about trying to find a way to implement this. After experimenting with various ways to write down flow definitions and parse them, I realized that the verification tests could be the flow document. The idea is to write a set of tests, in Python since it's the lingua franca of Mozilla, which can be read by both the firewall experts and the users requesting a change to the flows. To change flows, change the tests -- a diff makes the request unambiguous. To verify the result, just run the tests.

fwunit

I designed fwunit to support this: unit tests for flows. The idea is to pull in "live" flow configurations and then write tests that verify properties of those configurations. The tool supports reading Juniper SRX configurations as well as Amazon AWS security groups for EC2 instances, and can be extended easily. It can combine rules from several sources (for example, firewalls for each datacenter and several AWS VPCs) using a simple description of the network topology.

As a simple example, here's a test to make sure that the appropriate VLANs have access to the DeployStudio servers:

def test_install_build():
    rules.assertPermits(
        test_releng_scl3 + try_releng_scl3 + build_releng_scl3,
        deploystudio_servers,
        'deploystudio')

The rules instance there is a compact representation of all allowed network flows, deduced from firewall and AWS configurations with the fwunit command line tool. The assertPermits method asserts that the rules permit traffic from the test, try, and build VLANs to the deploystudio servers, using the "deploystudio" application. That all reads pretty naturally from the Python code.
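
To give one more example in the same style, here's what a second flow assertion could look like. This is a sketch only: the assertPermits call matches the example above, but the address sets and application name (admin_vlan, bugzilla_servers, 'https') are hypothetical placeholders rather than Mozilla's real flow definitions.

def test_admins_reach_bugzilla():
    # Hypothetical flow, for illustration only: assert that the
    # admin VLAN can reach the bugzilla servers over HTTPS.
    # The address sets would be defined elsewhere in the test suite,
    # alongside those used in test_install_build above.
    rules.assertPermits(
        admin_vlan,
        bugzilla_servers,
        'https')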

At Mozilla

We glue the whole thing together with a shell script that pulls the tests from our private git repository, runs fwunit to get the latest configuration information, and then runs the tests. Any failures are reported by email, at which point we know that our document (the tests) doesn't match reality, and can take appropriate action.
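
The actual glue is a shell script, but its shape is simple enough to sketch. Here's a rough Python equivalent; the repository path, test runner, and mail addresses are all made-up placeholders, and how fwunit is actually invoked depends on your configuration.

import smtplib
import subprocess
from email.mime.text import MIMEText

REPO = '/opt/flow-tests'  # hypothetical checkout of the private tests repo

def run(cmd):
    # Run a command in the repo, capturing combined output; raises on failure.
    return subprocess.check_output(cmd, cwd=REPO, stderr=subprocess.STDOUT)

try:
    run(['git', 'pull'])                      # fetch the latest tests
    run(['fwunit'])                           # pull fresh flow configurations
    run(['python', '-m', 'pytest', 'tests'])  # run the flow tests
except subprocess.CalledProcessError as e:
    # A failure means the document (the tests) no longer matches reality:
    # mail the output so someone can take appropriate action.
    msg = MIMEText(e.output.decode('utf-8', 'replace'))
    msg['Subject'] = 'fwunit: flow tests failed'
    msg['From'] = 'fwunit@example.com'
    msg['To'] = 'netops@example.com'
    smtp = smtplib.SMTP('localhost')
    smtp.sendmail(msg['From'], [msg['To']], msg.as_string())
    smtp.quit()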

We're still working on the details of the process involved in changing configurations -- do we update the tests first, or the configuration? Who is responsible for writing or modifying the tests -- the requester, or the person making the configuration change? Whatever we decide, it needs to maximize the benefits without placing undue load on any of the busy people involved in changing network flows.

Benefits

It's early days, but this approach has already paid off handsomely.

  • As expected, it's a readable, authoritative, verifiable account of our network configuration. Requirements met -- awesome!
  • With all tests in place, netops can easily "refactor" the configurations, using fwunit to verify that no expected behavior has changed. We've deferred a lot of minor cleanups as high-risk with low reward; easy verification should substantially reduce that risk.
  • Just about every test I've written has revealed some subtle misconfiguration -- either a flow that was requested incorrectly, or one that was configured incorrectly. These turn into flow-request bugs that can be dealt with at a "normal" pace, rather than the mad race to debug and fix that would occur later, when they impacted production operations.

Get Involved

I'm a Mozillian, so naturally fwunit is open source and designed to be useful to more than just Mozilla. If this sounds useful, please use it, and I'd love to hear from you about how I might make it work better for you. If you're interested in hacking on the software, there are a number of open issues in the github repo just waiting for a pull request.

[Source: Planet Mozilla]

Lukas Blakk: New to Bugzilla

I believe it was a few years ago, possibly more, when someone (was it Josh Matthews? David Eaves?) added a feature to Bugzilla that indicated when a person was “New to Bugzilla”. It was a visual cue next to their username and its purpose was to help others remember that not everyone in the Bugzilla soup is a veteran, accustomed to our jargon, customs, and best practices. This visual cue came in handy three weeks ago when I encouraged 20 new contributors to sign up for Bugzilla. 20 people who have only recently begun their journey towards becoming Mozilla contributors and open source mavens. In setting them loose upon our bug tracker I’ve observed two things:

ONE: The “New to Bugzilla” flag does not stay up long enough. I’ll file a bug on this and look into how long it currently does stay up, and recommend that if possible we should have it stay up until the following criteria are met (sketched in code after this list):
* The person has made at least 10 comments
* The person has put up at least one attachment
* The person has either reported, resolved, been assigned to, or verified at least one bug
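
Expressed as code, the proposed criteria might look something like this. It's a sketch only; the field names on user are illustrative, not Bugzilla's actual schema.

def still_new_to_bugzilla(user):
    # Keep the flag up until ALL of the criteria above are met:
    # 10+ comments, 1+ attachment, and at least one bug interaction.
    return (user.comment_count < 10
            or user.attachment_count < 1
            or not (user.bugs_reported or user.bugs_resolved
                    or user.bugs_assigned or user.bugs_verified))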

TWO: This one is a little harder – it involves more social engineering. Sometimes people might be immune to the “New to Bugzilla” cue, or overlook it, which has in some cases resulted in responses to bugs filed by my cohort of Ascenders where the commenter was neither helpful nor moving the issue forward. I’ve been fortunate to be in person with the Ascend folks and can tell them that if this happens they should let me know, but I can’t fight everyone’s fights for them over the long haul. So instead we should build into the system a way to make sure that when someone who is not new to Bugzilla replies immediately after a “New to Bugzilla” user, there is a reminder in the comment field – something along the lines of “You’re about to respond to someone who’s new around here, so please remember to be helpful”. Off to file the bugs!

[Source: Planet Mozilla]

Daniel Stenberg: A day in the curl project

I maintain curl and lead the development there. This is how I spend my time on an ordinary day in the project. Maybe I don’t do all of these things every single day, but sometimes I do and sometimes I just do a subset of them. I just want to give you a look into what I do and why I don’t add new stuff more often or faster… I spend about one to three hours on the project every day. Let me also stress that curl is a tiny little project in comparison with many other open source projects. I’m certainly not saying otherwise.

the new bug

Someone submits a new bug in the bug tracker or on one of the mailing lists. Most initial bug reports lack sufficient details so the first thing I do is ask for more info and possibly ask the submitter to try a recent version, as very often we get bugs reported on very old versions. Many bug reports take several demands for more info before the necessary details have been provided. I don’t really start to investigate a problem until I feel I have a sufficient amount of details. We’re a very small core team that acts on other people’s bugs.

the question by a newbie in the project

A new person shows up with a question. The question is usually similar to a FAQ entry or an example but not exactly. It deserves a proper response. This kind of question can often be answered by anyone, but also most people involved in the project don’t feel the need or “familiarity” to respond to such questions and therefore remain quiet.

the old mail I haven’t responded to yet

I want every serious email that reaches the mailing lists to get a response, so all mails that neither I nor anyone else responds to I keep around in my inbox, and when I have idle time I go back and catch up on old mails. Some of them can then of course result in a new bug or patch or whatever. Occasionally I have to resort to simply saving away the old mail without responding in order to catch up, just to cut the list of outstanding things to do a little.

the TODO list for my own sake, things I’d like to get working on

There are always things I really want to see done in the project, and I work on them far too little really. But every once in a while I ignore everything else in my life for a couple of hours and spend them on adding a new feature or fixing something I’ve been missing. Actual development of new features is a very small fraction of all time I spend on this project.

the list of open bug reports

I regularly revisit this list to see what I can do to push the open ones forward. Follow-up questions, deep dives into source code and specifications, or just the sad realization that a particular issue won’t be fixed in the near term (within a year?), so I close it as “future” and add the problem to our KNOWN_BUGS document. I strive to keep the bug list clean and only keep relevant bugs open. Those issues that are not reproducible, are left without the proper attention from the reporter, or otherwise stall will get closed. In general I feel quite lonely as responder in the bug tracker…

the mailing list threads that are sort of dying but I do want some progress or feedback on

In my primary email inbox I usually keep ongoing threads around. Lots of discussions just silently stop getting more posts and thus slowly wither away further up the list to become forgotten and ignored. With some interval I go back to see if the posters are still around, if there’s any more feedback or whatever in order to figure out how to proceed with the subject. Very often this makes me get nothing at all back and instead I just save away the entire conversation thread, forget about it and move on.

the blog post I want to do about a recent change or fix I did I’d like to highlight

I try to explain some changes to the world in blog posts. Not all changes, but the ones that are somehow noteworthy as they perhaps change the way things have been or introduce new fun features perhaps not that easily spotted. Of course all features are always documented etc, but sometimes I feel I need to put some extra focus on things in a more free-form style. Or I just write about meta stuff, like this very posting.

the reviewing and merging of patches

One of the most important tasks I have is to review patches. I’m basically the only person in the project who volunteers to review patches against any angle or corner of the project. When people have spent time and effort and gallantly send the results of their labor our way in the best possible format (a patch!), the submitter deserves a good review and proper feedback. Also, paving the road for more patches is one of the best ways to scale the project. Helping newcomers become productive is important.

Patches are preferably posted on the mailing lists, but there are also some coming in via pull requests on github, and while I strongly discourage that (due to them not getting the same attention and possible scrutiny as patches posted on the list) I sometimes let them through anyway just to be smooth.

When the patch looks good (or sometimes good enough and I just edit some minor detail), I merge it.

the non-disclosed discussions about a potential security problem

We’re a small project with a wide reach and security problems can potentially have grave impact on users. We take security seriously, and we very often have at least one non-public discussion going on about a problem in curl that may have security implications. We then often work on phrasing security advisories, working down exactly which versions that are vulnerable, producing patches for at least the most recent ones of those affected versions and so on.

tame stackoverflow

stackoverflow.com has become almost like a Wikipedia for source code and programming-related issues (although it isn’t a wiki), and that site is one of the primary referrers to curl’s web site these days. I tend to glance over the curl and libcurl related questions and offer my answers at times. If nothing else, it is good to help keep the amount of disinformation at low levels.

I strongly disapprove of people filing bug reports on such places or even very detailed (lib)curl core questions that should’ve been asked on the curl-library list.

there are idle times too

Yeah. Not very often, but sometimes I actually just need a day off all this. Sometimes I just don’t find motivation or energy enough to dig into that terrible seldom-happening bug on a platform I’ve never seen personally. A project like this never ends. The same day we release a new release, we just reset our clocks and we’re back on improving curl, fixing bugs and cleaning up things for the next release. Forever and ever until the end of time.


[Source: Planet Mozilla]

Curtis Koenig: The Curtis Report 2014-09-26

So my last report failed to mention something important. There is a lot I do that is not on this report; it only covers noteworthy items outside of run-the-business (RTB) activities. I do a good deal of bug handling, input, triage, and routing to get things to the right people, and remove bad, invalid, or mis-tagged items. I answer emails on projects and other items, etc. Just general work stuff. Last week had lots of vendor stuff (as noted below), and while that's kind of RTB, it's usually not this heavy, and we had 2 rush ones, so I felt they were worthy of note.

What I did this week

  • kit herder community stuff
  • [vendor redacted] communications
  • [vendor redacted] review followup
  • [vendor 2 redacted] rush review started
  • Tribe pre-planning for next month
  • [vendor redacted] follow-ups
  • triage security bugs
  • DerbyCon prep / registration
  • bitcoin vendor prep work
  • SeaSponge mentoring

Meetings Attended

Mon

  • impromptu [vendor redacted] review discussion
  • status meeting for [vendor redacted] security testing
  • Monday meeting

Tue

  • cloud services team (sort of)

Wed

  • impromptu [vendor redacted] standup
  • MWoS SeaSponge Weekly team meeting
  • Cloud Services Show & Tell
  • Mozillians Town Hall – Brand Initiatives (Mozilla + Firefox)
  • Web Bug Triage

Thu

  • security open mic

Fri-Sun

Non Work

  • deal with deer damage to car

[Source: Planet Mozilla]

Jordan Lund: This Week In Releng - Sept 21st, 2014

Major Highlights:

  • shipped 10 products in less than one day

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

[Source: Planet Mozilla]

Doug Belshaw: 21 emerging themes for Web Literacy Map 2.0

Over the past few weeks I’ve interviewed various people to gain their feedback on the current version of Mozilla’s Web Literacy Map. There was a mix of academics, educational practitioners, industry professionals and community members.* I’ve written up the interviews on a tumblr blog and the audio repository can be found at archive.org.

I wanted to start highlighting some of the things a good number of them talked about in terms of the Web Literacy Map and its relationship with Webmaker (and the wider Mozilla mission).


Introduction

I used five questions to loosely structure the interviews:

  1. Are you currently using the Web Literacy Map (v1.1)? In what kind of context?
  2. What does the Web Literacy Map do well?
  3. What’s missing from the Web Literacy Map?
  4. What kinds of contexts would you like to use an updated (v2.0) version of the Web Literacy Map?
  5. Who would you like to see use/adopt the Web Literacy Map?

How much we stuck to the questions in this order depended on the interviewee. Some really wanted to talk about their context. Others wanted to dwell on more conceptual aspects. Either way, it was interesting to see some themes emerge.

Emerging themes

I’m still synthesizing the thoughts contained within 18+ hours of audio, but here are the headlines so far…

1. The ‘three strands’ approach works well

The strands currently named Exploring / Building / Connecting seem to resonate with lots of people. Many called it out specifically as a strength of the Web Literacy Map, saying that it enables people to orient themselves reasonably quickly.

2. Without context, newbies can be overwhelmed

While many people talked about how useful the Web Literacy Map is as a ‘map of the territory’ giving an at-a-glance overview, some interviewees mentioned that the Web Literacy Map should really be aimed at mentors, educators, and other people who have already got some kind of mental model. We should be meeting end users where they are with interesting activities rather than immediately presenting them with a map that reinforces their lack of skills/knowledge.

3. Shared vocabulary is important

New literacies can be a contested area. One interviewee in particular talked about how draining it can be to have endless discussions and debates about definitions and scope. Several people, especially those using it in workshops, talked about how useful the Web Literacy Map is in developing a shared vocabulary and getting down to skill/knowledge development.

4. The ‘Connecting’ strand has some issues

Although interviewees agreed there were no ‘giant gaping holes’ in the Web Literacy Map, many commented on the third, ‘Connecting’ strand. Some mentioned that it seemed a bit too surface-level. Some wanted a more in-depth treatment of licensing issues under ‘Open Practices’. Others thought that the name ‘Connecting’ didn’t really capture what the competencies in that column are really about. Realistically, most people will be meeting the competencies in this strand through social media. There isn’t enough focus on this, nor on ‘personal branding’, thought some people.

5. Clear focus on learning through making/doing

Those interested in the pedagogical side of things zeroed in on the verb-based approach to the Web Literacy Map. They appreciated that, along with the Discover / Make / Teach flow on each competency page, users of webmaker.org are encouraged to learn through making and doing, rather than simply being tested on facts.

6. Allows other organizations to see how their work relates to Mozilla’s mission

Those using this out ‘in the field’ (especially those involved in Hive Learning Networks) talked about how the Web Literacy Map is a good conversation-starter. They mentioned the ease with which most other organizations they work with can map their work onto ours, once they’ve seen it. These organizations can then use it as a sense-check to see how they fit into a wider ecosystem. It allows them to quickly understand the difference between the ‘learn to code’ movement and the more nuanced, holistic approach advocated by Mozilla.

7. It doesn’t really look like a ‘map’

Although interviewees were happy with the word ‘Map’ (much more so than the previous ‘Standard’), many thought we may have missed a trick by not actually presenting it as a map. Some thought that the Web Literacy Map is currently presented in a too clear-cut way, and that we should highlight some of the complexity. There were a few ideas how to do so, although one UX designer warned against surfacing this too much, lest we end up with a ‘plate of spaghetti’. Nevertheless, there was a feeling that riffing on the ‘map’ metaphor could lead to more of an ‘exploratory’ approach.

8. Lacking audience definition

There was a generally-positive sentiment about the Web Literacy Map structuring the Webmaker Resource section, although interviewees were a bit unsure about audience definition. The Web Literacy Map seems to be more of a teaching tool rather than a learning tool. It was suggested that we might want to give Mentors and Learners a different view. Mentors could start with the more abstract competencies, whereas the Learners could start with specific, concrete, interest-based activities. Laura Hilliger’s Web Literacy Learning Pathways prototype was mentioned on multiple occasions.

9. Why is this important?

Although the Web Literacy Map makes sense to westerners in developed countries, there was a feeling among some interviewees that we don’t currently ‘make the case’ for the web. Why is it important? Why should people pay to get online? What benefits does it bring? We need to address this question before, or perhaps during, their introduction to the competencies included in the Web Literacy Map.

10. Arbitrary separation of ‘Security’ and ‘Privacy’ competencies

At present, ‘Privacy’ is a competency under the ‘Exploring’ strand, and ‘Security’ is a competency under the ‘Connecting’ strand. However, there’s a lot of interplay, overlap, and connections between the two. Although interviewees thought that they should be addressed explicitly, there was a level of dissatisfaction with the way it’s currently approached in the Web Literacy Map.

11. Better localization required

Those I interviewed from outside North America and the UK expressed some frustration at the lack of transparency around localization. One in particular had tried to get involved, but became demotivated by a lack of response when posing suggestions and questions via Transifex. Another mentioned that it was important not to focus on translation from English to other languages, but to generate local content. The idea of badges for localization work was mentioned on more than one occasion.

12. The Web Literacy Map should be remixable

Although many interviewees approached it from different angles, there was a distinct feeling that the Web Literacy Map should somehow be remixable. Some used a GitHub metaphor to talk of the ‘main branch’ and ‘forks’. Others wanted a ‘Remix’ button next to the map in a similar vein to Thimble and Popcorn Maker resources. This would allow for multiple versions of the map that could be contextualized and localized while still maintaining a shared vocabulary and single point of reference.

13. Tie more closely to the Mozilla Mission

One of the things I wanted to find out through gentle probing during this series of interviews was whether we should consider re-including the fourth ‘Protecting’ strand we jettisoned before reaching v1.0. At the time, we thought that ‘protecting the web’ was too political and Mozilla-specific to include in what was then a Web Literacy ‘Standard’. However, a lot has changed in a year - both with Mozilla and with the web. Although I got the feeling that interviewees were happy to tie the Web Literacy Map more closely to the Mozilla Mission, there wasn’t overall an appetite for an additional column. Instead, people talked about ‘weaving’ it throughout the other competencies.

14. Use cross-cutting themes to connect news events to web literacy

When we developed the first version of the Web Literacy Map, we didn’t include ‘meta-level’ things such as ‘Identity’ and ‘storytelling’. Along with ‘mobile’, these ideas seem too large or nebulous to be distinct competencies. It was interesting, therefore, to hear some interviewees talk of hooking people’s interest via news items or the zeitgeist. Given the timing of the interviews, the topical example tended to be interesting people in ‘Privacy’ and ‘Security’ via the iCloud celebrity photo leaks.

15. Develop user stories

Some interviewees felt that the Web Literacy Map currently lacks a ‘human’ dimension that we could rectify through the inclusion of some case studies showing real people who have learned a particular skill or competency. These could look similar to the UX Personas work.

16. Improve the ‘flow’ of webmaker.org for users

This is slightly outside the purview of the Web Literacy Map per se, but enough interviewees brought it up to surface it here. The feeling is that the connection between Webmaker Tools, the Web Literacy Map, and Webmaker badges isn’t clear. There should be a direct and obvious link between them. For instance, web literacy badges should be included in each competency page. Some even suggested a learner dashboard similar to the one Jess Klein proposed back in 2012.

17. Bake web literacy into Firefox

This, again, is veering away from the Web Literacy Map itself, but many interviewees mentioned how Mozilla should ‘differentiate’ Firefox within the market by allowing you to develop your web literacy skills ‘in the wild’. Some had specific examples of how this could work (“Hey, you just connected to a website using HTTPS, want to learn more?”) while others just had a feeling we should join things up a bit better.

18. Identify ‘foundational’ competencies

Although we explicitly avoided doing this with the first version of the Web Literacy Map, for some interviewees, having a set of ‘foundational’ competencies would be a plus point. It would give a starting point for those new to the area, and allow us to assume a baseline level from which the other competencies could be developed. We could also save the ‘darker’ aspects of the web for later to avoid scaring people off.

19. Avoid scope creep

Many interviewees warned against ‘scope creep’, or trying to cram too much into the Web Literacy Map. On the whole, there were lots of people I spoke to who like it just the way it is, with one saying that it would be relevant for a ‘good few years yet’. One of the valuable things about the Web Literacy Map is that it has a clear focus and scope. We should ensure we maintain that, was the general feeling. There’s also a feeling that it has a ‘strong understanding of technology’ that shouldn’t be watered down.

20. Version control

If we’re updating the Web Literacy Map, users need to know which version they’re viewing - and how to access previous versions. This is so they can know how up-to-date the current version is. We should also allow them to view previous iterations that they may have used to build a curriculum still being used by other organizations.

21. Use as a funnel to wider Mozilla projects

We currently have mozilla.org/contribute and webmaker.org/getinvolved, but some interviewees thought that we could guide people who keep selecting certain competencies towards different Mozilla areas - for example OpenNews or Open Science. The latter is also developing its own version of the Web Literacy Map, so that could be a good link. Also, even more widely, Open Hatch provide Open Source ‘missions’ that we could make use of.


*Although I was limited by my language and geographic location, I’m pretty happy with the range of views collected. Instead of a dry, laboratory-like study looking for statistical significance, I decided to focus on people I knew would have good insights, and with whom I could have meaningful conversations. Over the next couple of weeks I’m going to create a survey for community members to get their thoughts on some of the more concrete proposals I’ll make for Web Literacy Map 2.0.


Comments? Feedback? I’m @dajbelshaw on Twitter, or you can email me: doug@mozillafoundation.org.

[Source: Planet Mozilla]

Jordan Lund: This Week In Releng - Sept 7th, 2014

Major Highlights

  • big time saving in releases thanks to:
    • Bug 807289 - Use hardlinks when pushing to mirrors to speed it up

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):

[Source: Planet Mozilla]

William Lachance: Using Flexbox in web applications

Over the last few months, I discovered the joy that is CSS Flexbox, which solves the “how do I lay out this set of divs horizontally or vertically?” problem. I’ve used it in three projects so far:

  • Centering the timer interface in my meditation app, so that it scales nicely from a 320×480 FirefoxOS device all the way up to a high definition monitor
  • Laying out the chart / sidebar elements in the Eideticker dashboard so that maximum horizontal space is used
  • Fixing various problems in the Treeherder UI on smaller screens (see bug 1043474 and its dependent bugs)

When I talk to people about their troubles with CSS, layout comes up really high on the list. Historically, basic layout problems like a panel of vertical buttons have been ridiculously difficult, involving hacks with floating divs and absolute positioning, or JavaScript layout libraries. This is why people write articles entitled “Give up and use tables”.

Flexbox has pretty much put an end to these problems for me. There’s no longer any need to “give up and use tables” because using flexbox is pretty much just *like* using tables for layout, just with more uniform and predictable behaviour. :) It’s so great. I think we’re pretty close to Flexbox being supported across all the major browsers, so it’s fair to start using it for custom web applications where compatibility with (e.g.) IE8 is not an issue.

To try and spread the word, I wrote up a howto article on using flexbox for web applications on MDN, covering some of the common use cases I mention above. If you’ve been curious about flexbox but unsure how to use it, please have a look.

[Source: Planet Mozilla]

Jennie Rose Halperin: Why I feel like an Open Source Failure

I presented a version of this talk at the Supporting Cultural Heritage Open Source Software (SCHOSS) Symposium in Atlanta, GA in September 2014. This talk was generously sponsored by LYRASIS and the Andrew Mellon Foundation.


I often feel like an Open Source failure.

I haven’t submitted 500 patches in my free time, I don’t spend my after-work hours rating html5 apps, and I was certainly not a 14 year old Linux user. Unlike the incredible group of teenaged boys with whom I write my Mozilla Communities newsletter and hang out with on IRC, I spent most of my time online at that age chatting with friends on AOL Instant Messenger and doing my homework.

I am a very poor programmer. My Wikipedia contributions are pretty sad. I sometimes use Powerpoint. I never donated my time to Open Source in the traditional sense until I started at Mozilla as a GNOME OPW intern and while the idea of data gets me excited, the thought of spending hours cleaning it is another story.

I was feeling this way the other day and chatting with a friend about how reading celebrity news often feels like a better choice after work than trying to find a new open source project to contribute to or making edits to Wikipedia. A few minutes later, a message popped up in my inbox from an old friend asking me to help him with his application to library school.

I dug up my statement of purpose and I was extremely heartened to read my words from three years ago:

I am particularly interested in the interaction between libraries and open source technology… I am interested in innovative use of physical and virtual space and democratic archival curation, providing free access to primary sources.

It felt good to know that I have always been interested in these topics but I didn’t know what that would look like until I discovered my place in the open source community. I feel like for many of us in the cultural heritage sector the lack of clarity about where we fit in is a major blocker, and I do think it can be associated with contribution to open source more generally. Douglas Atkin, Community Manager at Airbnb, claims that the two main questions people have when joining a community are “Are they like me? And will they like me?”. Of course, joining a community is a lot more complicated than that, but the lack of visibility of open source projects in the cultural heritage sector can make even locating a project a whole lot more complicated.

As we’ve discussed in this working group, the ethics of cultural heritage and Open Source overlap considerably and

the open source community considers those in the cultural heritage sector to be natural allies.

In his article, “Who are you empowering?” Hugh Rundle writes: (I quote this article all the time because I believe it’s one of the best articles written about library tech recently…)

A simple measure that improves privacy and security and saves money is to use open source software instead of proprietary software on public PCs.

Community-driven, non-profit, and not good at making money are just some of the attributes that most cultural heritage organizations and open source projects have in common, and yet, when choosing software for their patrons, most libraries and cultural heritage organizations choose proprietary systems, and cultural heritage professionals are not the strongest open source contributors or advocates.

The main reasons for this are, in my opinion:


1. Many people in cultural heritage don’t know what Open Source is.

In a recent survey I ran of the Code4Lib and UNC SILS listservs, nearly every person surveyed could accurately respond to the prompt “Define Open Source in one sentence” though the responses varied from community-based answers to answers solely about the source code.

My sample was biased toward programmers and young people (and perhaps people who knew how to use Google, because many of the answers were directly lifted from the first line of the Wikipedia article about Open Source, which is definitely survey bias), but I think that it is indicative of one of the larger questions of open source.

Is open source about the community, or is it about the source code?

There have been numerous articles and books written on this subject, many of which I can refer you to (and I am sure that you can refer me to as well!) but this question is fundamental to our work.

Many people, librarians and otherwise, will ask: (I would argue most, but I am operating on anecdotal evidence)

Why should we care about whether or not the code is open if we can’t edit it anyway? We just send our problems to the IT department and they fix it.

Many people in cultural heritage don’t have many feelings about open source because they simply don’t know what it is and cannot articulate the value of one over the other. Proprietary systems don’t advertise as proprietary, but open source constantly advertises as open source, and as I’ll get to later, proprietary systems have cornered the market.

This movement from darkness to clarity brings to mind a story that Kathy Lussier told about the Evergreen project, where librarians who didn’t consider themselves “techy” jumped into IRC to tentatively ask a technical question and, due to the friendliness of the Evergreen community, were soon writing the documentation for the software themselves. They became a vital part of their community, participating in conferences and growing their skills as contributors.

In this story, the Open Source community engaged the user and taught her the valuable skill of technical documentation. She also took control of the software she uses daily and was able to maintain and suggest features that she wanted to see. This situation was really a win-win all around.

What institution doesn’t want to see their staff so well trained on a system that they can write the documentation for it?


2. The majority of the market share in cultural heritage is closed-source, closed-access software and they are way better at advertising than Open Source companies.

Last year, my very wonderful boss in the cataloging and metadata department of the University of North Carolina at Chapel Hill came back from ALA Midwinter with goodies for me: pens and keychains and postits and tote bags and those cute little staplers. “I only took things from vendors we use,” she told me.

Linux and Firefox OS hold 21% of the world’s operating system market share. (Interestingly, this is more globally than iOS, but still half that of Windows. On mobile, iOS and Android are approximately equal.)

Similarly, free, open source systems for cultural heritage are unfortunately not a high percentage of the American market. Wikipedia has a great list of proprietary and open source ILSs and OPACs, the languages they’re written in, and their cost. Marshall Breeding writes that FOSS software is picking up some market share, but it is still “the alternative” for most cultural heritage organizations.

There are so many reasons for this small market share, but I would argue (as my previous anecdote did for me,) that a lot of it has to do with the fact that these proprietary vendors have much more money and are therefore a lot better at marketing to people in cultural heritage who are very focused on their work. We just want to be able to install the thing and then have it do the thing well enough. (An article in Library Journal in 2011 describes open source software as: “A lot of work, but a lot of control.”)

As Jack Reed from Stanford and others have pointed out, most of the cost of FOSS in cultural heritage is developer time, and many cultural heritage institutions believe that they don’t have those resources. (John Brice’s example at the Meadville Public Library proves that communities can come together with limited developers and resources in order to maintain vital and robust open source infrastructures as well as significantly cut costs.)

I learned at this year’s Wikiconference USA that academic publishers had the highest profit margin of any company in the country last year, ahead of Google and Apple.

The academic publishing model is, for more reasons than one, completely antithetical to the ethics of cultural heritage work, and yet they maintain a large portion of the cultural heritage market share in terms of both knowledge acquisition and software. Megan Forbes reminds us that the platform Collection Space was founded as the alternative to the market dominance of “several large, commercial vendors” and that cost put them “out of reach for most small and mid-sized institutions.”

Open source has the chance to reverse this vicious cycle, but institutions have to put their resources in people in order to grow.

While certain companies like OCLC are working toward a more equitable future, with caveats of course, I would argue that the majority of proprietary cultural heritage systems are providing inferior product to a resource poor community.


3. People are tired and overworked, particularly in libraries, and to compound that, they don’t think they have the skills to contribute.

These are two separate issues, but they’re not entirely disparate so I am going to tackle them together.

There’s this conception outside of the library world that librarians are secret coders just waiting to emerge from their shells and start categorizing datatypes instead of MARC records (this is perhaps a misconception due to a lot of things, including the sheer diversity of types of jobs that people in cultural heritage fill, but hear me out.)

When surveyed, the skill that entering information science students most want to learn is “programming.” However, the majority of MLIS programs are still teaching Microsoft Word and beginning html as technology skills.

Learning to program computers takes time and instruction and while programs like Women who Code and Girl Develop It can begin educating librarians, we’re still faced with a workforce that’s over 80% female-identified that learned only proprietary systems in their work and a small number of technology skills in their MLIS degrees.

Library jobs, and further, cultural heritage jobs are dwindling. Many trained librarians, art historians, and archivists are working from grant to grant on low salaries with little security and massive amounts of student loans from both undergraduate and graduate school educations. If they’re lucky to get a job, watching television or doing the loads of professional development work they’re expected to do in their free time seems a much better choice after work than continuing to stare at a computer screen for a work-related task or learn something completely new. For reference: an entry-level computer programmer can expect to make over $70,000 per year on average. An entry-level librarian? Under $40,000. I know plenty of people in cultural heritage who have taken two jobs or jobs they hate just to make ends meet, and I am sure you do too.

One can easily say, “Contributing to open source teaches new skills!” but if you don’t know how to make non-code contributions or the project is not set up to accept those kinds of contributions, you don’t see an immediate pay-off in being involved with this project, and you are probably not willing to stay up all night learning to code when you have to be at work the next day or raise a family. Programs like Software Carpentry have proven that librarians, teachers, scientists, and other non-computer scientists are willing to put in that time and grow their skills, so to make any kind of claim without research would be a reach and possibly erroneous, but I would argue that most cultural heritage organizations are not set up in a way to nurture their employees for this kind of professional development. (Not because they don’t want to, necessarily, but because they feel they can’t or they don’t see the immediate value in it.)

I could go on and on about how a lot of these problems are indicative of cultural heritage work being an historically classed and feminized professional grouping, but I will spare you right now, although you’re not safe if you go to the bar with me later.

In addition, many open source projects operate with a “patches welcome!” or “go ahead, jump in!” or “We don’t need a code of conduct because we’re all nice guys here!” mindset, which is not helpful to beginning coders, women, or really, anyone outside of a few open source fanatics.

I’ve identified a lot of problems, but the title of this talk is “Creating the Conditions for Open Source Community” and I would be remiss if I didn’t talk about what works.

Diversification, both in terms of types of tasks and types of people and skillsets as well as a clear invitation to get involved are two absolute conditions for a healthy open source community.

Ask yourself the questions: Are you a tight-knit group with a lot of IRC in-jokes that new people may not understand? Are you all white men? Are you welcoming? Paraphrasing my colleague Sean Bolton: the steps to an inviting community are to build understanding, build connections, build clarity, build trust, and build pilots, which creates a win-win.

As communities grow, it’s important to be able to recognize and support contributors in ways that feel meaningful. That could be a trip to a conference they want to attend, a Linkedin recommendation, a professional badge, or a reference, or best yet: you could ask them what they want. Our network for contributors and staff is adding a “preferred recognition” system. Don’t know what I want? Check out my social profile. (The answer is usually chocolate, but I’m easy.)

Finding diverse contribution opportunities has been difficult for open source since, well, the beginning of open source. Even for us at Mozilla, with our highly diverse international community and hundreds of ways to get involved, we often struggle to bring a diversity of voices into the conversation, and to find meaningful pathways and recognition systems for our 10,000 contributors.

In my mind, education is perhaps the most important part of bringing in first-time contributors. Organizations like Open Hatch and Software Carpentry provide low-cost, high-value workshops for new contributors to locate and become a part of Open Source in a meaningful and sustained manner. Our Webmaker program introduces technical skills in a dynamic and exciting way for every age.

Mentorship is the last very important aspect of creating the conditions for participation. Having a friend or a buddy or a champion from the beginning is perhaps the greatest motivator according to research from a variety of different papers. Personal connection runs deep, and is a major indicator for community health. I’d like to bring mentorship into our conversation today and I hope that we can explore that in greater depth in the next few hours.

With mentorship and 1:1 connection, you may not see an immediate uptick in your project’s contributions, but a friend tells a friend tells a friend and then eventually you have a small army of motivated cultural heritage workers looking to take back their knowledge.

You too can achieve on-the-ground action. You are the change you wish to see.

Are you working in a cultural heritage institution and are about to switch systems? Help your institution switch to the open source solution and point out the benefits of their community. Learning to program? Check out the Open Hatch list of easy bugs to fix! Are you doing patron education? Teach them Libre Office and the values around it. Are you looking for programming for your library? Hold a Wikipedia edit-a-thon. Working in a library? Try working open for a week and see what happens. Already part of an open source community? Mentor a new contributor or open up your functional area for contribution.

It’s more than just “if you build it, they will come.”

If you make open source your mission, people will want to step up to the plate.

In order to close, I’m going to tell a story that I can’t take credit for, but I will tell it anyway.

We have a lot of ways to contribute at Mozilla. From code to running events to learning and teaching the Web, it can be occasionally overwhelming to find your fit.

A few months ago, my colleague decided to create a module and project around updating the Mozilla Wiki, a long-ignored, frequently used, and under-resourced part of our organization. As an information scientist and former archivist, I was psyched. The space that I called Mozilla’s collective memory was being revived!

We started meeting in April and it became clear that there were other wiki-fanatics in the organization who had been waiting for this opportunity to come up. People throughout the organization were psyched to be a part of it. In August, we held a fantastically successful workweek in London, reskinned the wiki, created a regular release cycle, wrote a manual and a best practice guide, and are still going strong with half contributors and half paid-staff as a regular working group within the organization. Our work has been generally lauded throughout the project, and we’re working hard to make our wiki the resource it can be for contributors and staff.

To me, that was the magic of open source. I met some of my best friends, and at the end of the week, we were a cohesive unit moving forward to share knowledge through our organization and beyond. And isn’t that a basic value of cultural heritage work?

I am still an open source failure. I am not a code fanatic, and I like the ease-of-use of my used iPhone. I don’t listen to techno and write JavaScript all night, and I would generally rather read a book than go to a hackathon.

And despite all this, I still feel like I’ve found my community.

I am involved with open source because I am ethically committed to it, because I want to educate my community of practice and my local community about what working open can bring to them.

When people ask me how I got involved with open source, my answer is: I had a great mentor, an incredible community and contributor base, and there are many ways to get involved in open source.

While this may feel like a new frontier for cultural heritage, I know we can do more and do better.

Open up your work as much as you can. Draw on the many, many intelligent people doing work in the field. Educate yourself and others about the value that open source can bring to your institution. Mentor someone new, even if you’re shy. Connect with the community and treat your fellow contributors with respect. Who knows?

You may get an open source failure like me to contribute to your project.

[Source: Planet Mozilla]

Erik Vold: Jetpack Pro Tip - Using The Add-on Debugger With JPM

Did you know that there is an Add-on Debugger in Firefox? If so, good for you!

Now with JPM using the Add-on Debugger is even easier. To use the add-on debugger automatically when using jpm you simply need to add a --debug option.

So the typical:

jpm run -b nightly

Would become:

jpm run -b nightly --debug

[Source: Planet Mozilla]

Ludovic Hirlimann: Tips on organizing a pgp key signing party

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. I the last year I’ve organized 3 that were successful (eg with more then 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up at the hotel I was staying in the morning - that doesn’t work. Having catering at the venues is even better, it will encourage people to come from far away (or long distance commute). Try to show the path in the venues with signs (paper with PGP key signing party and arrows help).

2. Date and time

Meeting in the evening after work works best (starting at 18:00 or 18:30).

Let people know how long it will take (count 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer/cola/etc. you’ll need to provide if you cater food.

I’ve been using eventbrite to manage attendance at my last three meeting it let’s me :

  • know who is coming
  • Mass mail participants
  • have them get a calendar reminder

4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the people using gpg who are registered on that site in the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send an announcement to them with:

  • date
  • venue
  • link to eventbrite and why I use it
  • ask them to forward (they know the area better than you)
  • I also use Lanyrd and Twitter but I’m not convinced that they work.

For my last announcement it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014

Hello my name is ludovic,

I'm a sysadmin at mozilla working remote from europe. I've been
involved with Thunderbird a lot (and still am). I'm organizing a pgp Key
signing party in the Mozilla san francisco office on September the 26th
2014 from 6PM to 8PM.

For security and assurances reasons I need to count how many people will
attend. I've setup an eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - if you change your
mind cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make
a list with keys and fingerprints before the event to make things more
manageable (but I don't promise).

For those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic

ps sent to buug.org, nblug.org and penlug.org - please feel free to post
where appropriate (the more the merrier, the stronger the web of trust).
ps2 I have contacted people listed on biglumber to have more gpg related
people show up.

--
[:Usul] MOC Team at Mozilla
QA Lead for Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, handing out a list of participants helps a lot (I’ve used http://www.phildev.net/pius/ and my own scripts to make one). It makes things easier for you and for attendees. Tell people what they need to bring (ID, a pen, and printed fingerprints if you don’t provide a list).
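
If you don’t use a dedicated tool, plain GnuPG can already produce the raw material for such a list. A minimal sketch, assuming you have first imported the attendees’ keys (for example with gpg --recv-keys); trimming the output down to the people who actually signed up is left to you:

gpg --list-keys --fingerprint > fingerprints.txt

This writes every public key in your keyring, each with its full fingerprint, to fingerprints.txt; edit it down to the registered attendees before printing.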

6. Send reminders

Send people reminders and let them know how many people intend to show up; it boosts attendance.

[Source: Planet Mozilla]

Chris McAvoy: Me and Open Badges – Different, but the same

Hi there. If you read this blog, it’s probably for one of three things:

1) my investigation of the life of Isham Randolph, the chief engineer of the Chicago Sanitary and Ship Canal.
2) you know me and you want to see what I’m doing but you haven’t discovered Twitter or Facebook yet.
3) Open Badges.

This is a quick update for everyone in that third group, the Open Badges crew. I have some news.

When I joined the Open Badges project nearly three years ago, I knew this was something that, once I joined, I wouldn’t leave. The idea of Open Badges hits me exactly where I live, at the corner of ‘lifelong learning’ and ‘appreciating people for who they are’. I’ve been fortunate that my love of lifelong learning and self-teaching led me down a path where I get to do what I love as my career. Not everyone is that fortunate. I see Open Badges as a way to make my very lucky career path the norm instead of the exception. I believe in the project, I believe in the goals, and I’m never going to stop working toward bringing that kind of opportunity to everyone, regardless of the university they attended or the degree hanging on their wall.

This summer has been very exciting for me. I joined the Badge Alliance, chaired the BA standard working group and helped organize the first BA Technology Council. At the same time, I was a mentor for Chicago’s Techstars program and served as an advisor to a few startups in different stages of growth. The Badge Alliance work has been tremendously satisfying: the standard working group is about to release its first cycle report, and it’s been great to see our accomplishments all written up in one place. We’ve made a lot of progress in a short amount of time. That said, my role at the Alliance has been focused on standards growth, some evangelism and guiding a small prototyping project. As much as I loved my summer, the projects and work don’t fit the path I was on. I’ve managed engineering teams for a while now, building products and big technology architectures. The process of guiding a standard is something I’m very interested in, but it doesn’t feel like a full-time job right now. I like getting my hands dirty (in Emacs); I want to write code and direct some serious engineering workflow.

Let’s cut to the chase – after a bunch of discussions with Sunny Lee and Erin Knight, two of my favorite people in the whole world, I’ve decided to join Earshot, a Chicago big data / realtime geotargeted social media company, as their CTO. I’m not leaving the Badge Alliance. I’ll continue to serve as the BA director of technology, but as a volunteer. Earshot is a fantastic company with a great team. They understand the Open Badges project and want me to continue to support the Badge Alliance. The Badge Alliance is a great team, they understand that I want to build as much as I want to guide. I’m so grateful to everyone involved for being supportive of me here, I can think of dozens of ways this wouldn’t have worked out. Just a bit of life lesson – as much as you can, work with people who really care about you, it leads to situations like this, where everyone gets what they really need.

The demands of a company moving as fast as Earshot will mean that I’ll be less available, but no less involved in the growth of the Badge Alliance and the Open Badges project. From a tactical perspective, Sunny Lee will be taking over as chair of the standard working group. I’ll still be an active member. I’ll also continue to represent the BA (along with Sunny) in the W3C credentials community group.

If you have any questions, please reach out to me! I’ll still have my chris@badgealliance.org email address…use it!

[Source: Planet Mozilla]

Christian Heilmann: Reconnecting at TEDxLinz – impressions, slides, resources

I just returned from Linz, Austria, where I spoke at TEDxLinz yesterday. After my stint at TEDxThessaloniki earlier in the year, I was very proud to be invited to another one; I love the variety of talks you encounter at these events.

The overall topic of the event was “re-connect” and I was very excited to hear all the talks covering a vast range of topics. The conference was bilingual with German (well, Austrian) talks and English ones. Oddly enough, no speaker was a native English speaker.

My favourite parts were:

  • Ingrid Brodnig talking about online hate and how to battle it
  • Andrea Götzelmann talking about re-integrating people into their home countries after emigrating: a heart-warming story of helping out people who moved away and failed, so that they can return and succeed
  • Gergely Teglasy talking about creating a crowd-curated novel written on Facebook
  • Malin Elmlid of The Bread Exchange showing how her love of creating her own food got her out into the world to learn about all kinds of different cultures, and how exchanging goods and services beats being paid.
  • Elisabeth Gatt-Iro and Stefan Gatt showing us how to keep our relationships fresh and let love listen.
  • Johanna Schuh explaining how asking herself questions about her own behaviour rather than throwing blame gave her much more peace and the ability to go out and speak to people.
  • Stefan Pawel enlightening us about how far ahead Linz is compared to a lot of other cities when it comes to connectivity (150 open hot spots, webspace for each city dweller)

The location was the convention centre of a steel factory and the stage setup was great and not over the top. The audience was very mixed and very excited and all the speakers did a great job mingling. Despite the impressive track record of all of them there was no sense of diva-ism or “parachute presenting”.
I had a lovely time at the speaker’s dinner and going to and from the hotel.

The hotel was a special case in itself: I felt like I was in an old movie and instead of using my laptop I was tempted to grow a tufty beard and wear layers and layers of clothes and a nice watch on a chain.

[Photo: hotel room]

My talk was about bringing the social back into social media or, in other words, about no longer chasing numbers of likes and inane comments and going back to a web of user-generated content made by real people. I have made no secret of it in the past: I dislike memes and animated GIFs cropped from a TV series or movie with a passion, and this was my chance to grand-stand about it.

I wanted the talk to be a response to the “Look up” and “Look down” videos about social oversharing leading to less human interaction. My goal was to move the conversation in a different direction, explaining that social media is for us to share the things we did and want to share. The big issue is that the addiction-inducing game mechanics of social media platforms instead lead us to post as much as we can and chase being the most shared, instead of being the creators.

This also leads to addiction, and thus to strange online behaviour, up to and including over-sharing material that might be used to blackmail us.

My slides are on Slideshare.

Resources I covered in the talk:

Other than having a lot of fun on stage I also managed to tick some things off my bucket list:

  • Vandalising a TEDx stage
  • Being on stage with my fly open
  • Using the words “sweater pillows” and “dangly bits” in a talk

I had a wonderful time all in all and I want to thank the organisers for having me, the audience for listening, the other speakers for their contribution and the caterers and volunteers for doing a great job to keep everybody happy.

[Source: Planet Mozilla]

Jeff Walden: Minor changes are coming to typed arrays in Firefox and ES6

JavaScript has long included typed arrays to efficiently store numeric arrays. Each kind of typed array had its own constructor. Typed arrays inherited from element-type-specific prototypes: Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, and so on. Each of these prototypes contained useful methods (set, subarray) and properties (buffer, byteOffset, length, byteLength) and inherited from Object.prototype.

This system is a reasonable way to expose typed arrays. Yet as typed arrays have grown, it has become unwieldy. When a new typed array method or property is added, distinct copies must be added to Int8Array.prototype, Float64Array.prototype, Uint32Array.prototype, &c. Likewise for “static” functions like Int8Array.from and Float64Array.from. These distinct copies cost memory: a small amount, but across many tabs, windows, and frames it can add up.

A better system

ES6 changes typed arrays to fix these issues. The typed array functions and properties now work on any typed array.

var f32 = new Float32Array(8); // all zeroes
var u8 = new Uint8Array([0, 1, 2, 3, 4, 5, 6, 7]);
Uint8Array.prototype.set.call(f32, u8); // f32 contains u8's values

ES6 thus only needs one centrally-stored copy of each function. All functions move to a single object, denoted %TypedArray%.prototype. The typed array prototypes then inherit from %TypedArray%.prototype to expose them.

assertEq(Object.getPrototypeOf(Uint8Array.prototype),
         Object.getPrototypeOf(Float64Array.prototype));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array.prototype)),
         Object.prototype);
assertEq(Int16Array.prototype.subarray,
         Float32Array.prototype.subarray);

ES6 also changes the typed array constructors to inherit from the %TypedArray% constructor, on which functions like Float64Array.from and Int32Array.of live. (Neither function is in Firefox yet, but they’re coming soon!)

assertEq(Object.getPrototypeOf(Uint8Array),
         Object.getPrototypeOf(Float64Array));
assertEq(Object.getPrototypeOf(Object.getPrototypeOf(Int32Array)),
         Function.prototype);
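
For illustration, here is how those inherited functions are specified to behave once they ship; a small sketch of the ES6 semantics, which (as noted above) you can’t run in Firefox quite yet:

var doubled = Float64Array.from([1, 2, 3], function (x) { return x * 2; }); // Float64Array containing 2, 4, 6
var ints = Int32Array.of(10, 20, 30); // Int32Array containing 10, 20, 30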

I implemented these changes a few days ago in Firefox. Grab a nightly build and test things out with a new profile.

Conclusion

In practice this won’t affect most typed array code. Unless you depend on the exact [[Prototype]] sequence or expect typed array methods to only work on corresponding typed arrays (and thus you’re deliberately extracting them to call in isolation), you probably won’t notice a thing. But it’s always good to know about language changes. And if you choose to polyfill an ES6 typed array function, you’ll need to understand %TypedArray% to do it correctly.
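
As a rough sketch of what such a polyfill involves under the layout described above (the sum method here is purely hypothetical, not part of ES6; assertEq is the SpiderMonkey shell helper used earlier):

// %TypedArray%.prototype is not a global; recover it as the
// [[Prototype]] of any concrete typed array prototype.
var TypedArrayPrototype = Object.getPrototypeOf(Int8Array.prototype);

// Install the method once, on the shared prototype, so that every
// typed array kind inherits it.
if (!TypedArrayPrototype.sum) {
  Object.defineProperty(TypedArrayPrototype, "sum", {
    value: function sum() {
      var total = 0;
      for (var i = 0; i < this.length; i++)
        total += this[i];
      return total;
    },
    writable: true, configurable: true, enumerable: false
  });
}

assertEq(new Uint8Array([1, 2, 3]).sum(), 6);
assertEq(new Float32Array([2.5, 2.5]).sum(), 5);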

[Source: Planet Mozilla]

Eric Shepherd: The Sheppy Report: September 26, 2014

I can’t believe another week has already gone by. This is that time of the year when time starts to scream along toward the holidays at a frantic pace.

What I did this week

  • I’ve continued working heavily on the server-side sample component system
    • Implemented the startup script support, so that each sample component can start background services and the like as needed.
    • Planned and started implementation work on support for allocating ports each sample needs.
    • Designed tear-down process.
  • Created a new “download-desc” class for use on the Firefox landing page on MDN. This page offers download links for all Firefox channels, and this class is used to correct a visual glitch. The class has not yet been placed into production on the main server, though. See this bug to track the landing of this class.
  • Updated the MDN administrators’ guide to include information on the new process for deploying changes to site CSS now that the old CustomCSS macro has been terminated on production.
  • Cleaned up Signing Mozilla apps for Mac OS X.
  • Created Using the Mozilla build VM, based heavily on Tim Taubert’s blog post and linked to it from appropriate landing pages.
  • Copy-edited and revised the Web Bluetooth API page.
  • Deleted a crufty page from the Window API.
  • Meetings about API documentation updates and more.

Wrap up

That’s a short-looking list but a lot of time went into many of the things on that list; between coding and research for the server-side component service and experiments with the excellent build VM (I did in fact download it and use it almost immediately to build a working Nightly), I had a lot to do!

My work continues to be fun and exciting, not to mention outright fascinating. I’m looking forward to more, next week.

[Source: Planet Mozilla]

More News

