Konstantinos Antonakoglou: A Creative Commons music video made out of other CC videos
Hello! Let’s go straight to the point. Here is the video:
…and here are the videos that were used, all under the Creative Commons Attribution licence: http://wonkydollandtheecho.com/thanks.html. They are downloadable via Vimeo, of course.
Videos available from NASA and the ALMA observatory were also used.
The video (not the audio) is under the Creative Commons BY-NC-SA licence, which I think is quite reasonable since every scene used from the source videos (ok, almost every scene) has lyrics/graphics embedded in it.
I hope you like it! I didn’t have a lot of time to make this video, but I like the result. The tools I used are unfortunately not open source, because the learning curve for the open-source alternatives is quite steep. I will definitely try them in the future. Actually, I really haven’t come across any alternative to Adobe After Effects. You might say Blender…but is it really an alternative? Any thoughts?
PS. More news soon for the Sopler project (a web application for making to-do lists) and other things I’ve been working on lately (like MQTT-SN).
[Source: Planet Mozilla]
Brendan Eich: MWC 2014, Firefox OS Success, and Yet More Web API Evolution
Just over a week ago, I left Barcelona and Mobile World Congress 2014, where Mozilla had a huge third year with Firefox OS.
We announced the $25 Firefox OS smartphone with Spreadtrum Communications, targeting retail channels in emerging markets, and attracting operator interest to boot. This is an upgrade for those channels at about the same price as the feature phones selling there today. (Yes, $25 is the target end-user price.)
We showed the Firefox OS smartphone portfolio growing upward too, with more and higher-end devices from existing and new OEM partners. Peter Bright’s piece for Ars Technica is excellent and has nice pictures of all the new devices.
We also were pleased to relay the good news about official PhoneGap/Cordova support for Firefox OS.
We were above the fold for the third year in a row in Monday’s MWC daily.
(Check out the whole MWC 2014 photo set on MozillaEU’s Flickr.)
As I’ve noted before, our success in attracting partners is due in part to our ability to innovate and standardize the heretofore-missing APIs needed to build fully-capable smartphones and other devices purely from web standards. To uphold tradition, here is another update to my progress reports from last year and from 2012.
First, and not yet a historical curiosity: the still-open tracking bug asking for “New” Web APIs, filed at the dawn of B2G by Andreas Gal.
Next, links for “Really-New” APIs, most making progress in standards bodies:
Yet more APIs, some new enough that they are not ready for standardization:
Finally, the lists of new APIs in Firefox OS 1.1, 1.2, and 1.3:
This is how the web evolves: by implementors championing and testing extensions, with emerging consensus if at all possible, else in a pref-enabled or certified-app sandbox if there’s no better way. We thank colleagues at W3C and elsewhere who are collaborating with us to uplift the Web to include APIs for all the modern mobile device sensors and features. We invite all parties working on similar systems not yet aligned with the emerging standards to join us.
/be [Source: Planet Mozilla]
James Long: Open-Sourcing My Gambit Scheme iOS Game from 2010
Back in 2009-2010, I got Gambit Scheme running on iOS and decided to build a game with it. The result was Farmageddon, a stupid game where you blow up farm animals to avoid being hit by them.
I blogged about my progress working with Scheme on iOS back then and evidently a lot of people were inspired by it. This was the main blog post, in addition to a bunch of videos. Recently another iOS game was featured on Hacker News that was written in Gambit Scheme, and it inspired me to dredge up the source of my game and completely open source it and talk about it.
I used to work with Lang Martin and Ben Weaver at a small webdev shop right out of college. They were a little older than me and far more technically grounded than I was at the time. Occasionally I would hear "lisp" and "scheme" murmured around the office while trying to focus on my C++ game engine side project, and I thought they were just trying to sound cool.
Boy was my mind about to be blown. Eventually we all decided to play around with Scheme and see if we could use it internally. I knew nothing about it, but I tried to keep up with the conversation and more often than not ended up saying foolish things. Tired of feeling out of my depth, I committed to studying Scheme and it still influences me to this day. This is why it's so important to surround yourself with people smarter than you. I got lucky.
Fast-forward a few years later, I was feeling burned out at my job and decided to quit and try freelancing. I set aside the first few months to try and make an iOS game (this was right around the time iOS was exploding). Having fallen in love with Scheme, I endeavoured to make a game with Scheme and prove that it can be practical and performant, as well as making you more productive.
And so I made Farmageddon.
Show Me the Source!
Enough talking, here's the source. You're looking at a completely unfiltered, raw project. Everything I was thinking of is in there somewhere. You're also looking at the messiest project with the worst code, ever.
I was so naïve back then. Set aside a couple months to build a game from scratch, including porting a whole language to a completely new platform? Are you kidding me?
I ported Gambit Scheme to iOS, which basically just means cross-compiling with the right options and writing the necessary FFIs. The actual port wasn't too much work, which was exciting but dangerous because it blinded me to the fact that I would have to build everything myself. Not only was I lacking an OpenGL rendering library, I didn't even have access to the OpenGL API. I had to write an FFI for that. (Actually, I wrote a Scheme program that parsed C++ header files and auto-extracted it.)
Additionally, I created sounds, 3d models, game mechanics, user interfaces, and a basic 3d engine. See all the resources here. I did hire a local designer to make some really cool gritty nuclear farm graphics for the game, but everything else I did myself. Which is why the game is terrible.
Regardless of how badly Farmageddon failed commercially, it was one of the most transformative experiences of my life. I learned tons about project scope, marketing, games, and a lot of other stuff. But even more, I got to experience working in a minimal but powerful language that I could shape to my needs, with a REPL/debugger always there to incrementally play with things.
It wasn't just continuations, green threads, macros, records, and tail-call optimizations that made me a better programmer. It was the idea of incremental development, where you could always redefine a function at run-time to try something new, or inspect and change any data structure. We've come close to that with browser devtools, but the experience still isn't quite what it should be.
So if you haven't already, you really should learn a Lisp. Personally I like Gambit, but Chicken and Racket are really good too. Clojure is also great, just a different flavor because it's not a minimal Scheme. It doesn't matter which. Learn one of them.
These are some videos I made showing off the real-time REPL and debugger. The first two were the most popular.
There are a few other ones as well.
The code is incredibly messy, but I feel warm and nostalgic looking at it. There are a few interesting things to point out about it.
Most of the Obj-C code is in src/app. The entry point is in main.m, which initializes and configures the Gambit virtual machine. EAGLView.mm is where most of the code that interacts with the iOS UI lives.
The main entry point for Scheme is in src/init.scm. At the bottom of the file are two FFI functions, including c-render. Those are exposed at the C level (c-render as render) and the Obj-C code calls into them.
All of the FFIs are in src/ffi. I think I wrote most of them by hand and auto-generated a few of them. What's neat about Gambit is that you can embed any kind of C/C++/Obj-C code. For example, here is the FFI for invoking methods in the iOS view to change the UI. The Scheme methods embed Obj-C code straight into them. You can see more of this in the iOS FFI, which lets me allocate native iOS data structures. Lastly, you can see my attempts at optimization by converting Scheme vectors into native C arrays.
The main game loop is in farmageddon.scm. Most of the work is in the various screens, like level.scm, which renders and updates the main game.
The main component of the game engine is in src/lib/scene.scm. I used Gambit's native record types and wrote a macro to generate fields that dispatch dynamically on the type, for making game entities.
All of my tests were simply top-level Scheme code that I live evaluated when the game was running. No automation for me!
Gambit has a powerful cooperative threading system, and I used it extensively. The game and sound system each had a thread and would send messages to the main thread to change the game state. Each level had a thread running to fire off events at random intervals, and I could simply call thread-sleep! to wait for a certain period. Note that these aren't real OS threads, just cooperative ones, so it was all safe.
The remote debugger is in the emacs directory, and my Emacs integration was called grime. Since I had a live REPL into my game from Emacs, I even wrote Emacs helper functions to change game state and bound them to keys so I could invoke them quickly.
There's a lot more in there, and like I said it's very messy. But there are a lot of gems in there too. I hope it continues to inspire others.
[Source: Planet Mozilla]
Ben Hearsum: This week in Mozilla RelEng – March 7th, 2014
Completed work (resolution is ‘FIXED’):
- Balrog: Backend
- General Automation
- Loan Requests
- Platform Support
- Release Automation
- Repos and Hooks
In progress work (unresolved and assigned):
- Balrog: Backend
- Balrog: Frontend
- General Automation
- Loan Requests
- Platform Support
- Release Automation
- Repos and Hooks
[Source: Planet Mozilla]
Selena Deckelmann: Weekly Feminist Work in Tech by Mozillians roundup – Week of March 3, 2014
We have a ton of individual work done by MoFo and MoCo employees related to feminism, feminist activism and the larger technology community. So much is happening, I can barely keep track!
I’ve reached out to a few people I work with to get some highlights and spread the word about interesting projects we’re all working on. If you are a Mozillian and occasionally or regularly work on feminist issues in the tech community, please let me know! My plan is to ping people every Friday morning and post a blog post about what’s happened in the last week.
Without further ado:
Dispatch from me, Selena Deckelmann:
- I’m presenting at SF Github HQ on Thurs March 13, 7pm as part of the Passion Projects series (Julie Horvath’s project). I’ll be talking about teaching beginners how to code and contribute to open source, specifically through my work with PyLadies. I’m giving a similar talk this afternoon at Portland State University to their chapter of the ACM.
- Just wrapped up a Git workshop for PyLadiesPDX and am gearing up for a test-run of a “make a Flask blog in 80-lines of code” workshop! Course materials are available here for “intro to git” workshops.
- Lukas, Liz, me and others (I’m not sure who all else!!) are coordinating a Geekfeminism and feminist hackerspace meetup at PyCon 2014. The details aren’t published yet, so stay tuned!
- PyLadies PyCon 2014 lunch is happening again!
- PyLadies will also be holding a Mani-Pedi party just like in 2013. Stay tuned for details!
- Brownbags for the most recent GNOME Outreach Program for Women contributors are scheduled for next Friday March 14, 10am and 2pm. (thanks Larissa!!) Tune in at http://air.mozilla.com. One of the GNOME Outreach Program for Women contributors is Jennie Rose Halperin.
Dispatch from Liz Henry:
- I’m doing a lot of work to support Double Union feminist hackerspace, a nonprofit in San Francisco. We are hosting tech and arts workshops, and establishing connections with other hackerspaces in the US and around the world. Lukas is also involved with this effort! We have over 100 members now using the space.
- For PyCon I would like to host fairly informal sessions in our Feminist Hacker Lounge, on QA, bug triaging, and running/writing WebQA automated tests with pytest and selenium.
- I’m hoping to have funding for an OPW intern for this upcoming round to work on the back end of a QA community facilitating tool, using Python and various APIs for Mozilla tools like Bugzilla, Moztrap, and the Mozillians profiles.
Dispatch from Lukas Blakk:
- Just held the Lesbians Who Tech hackathon at the Mozilla SF space and it was an amazing weekend of networking, recruiting for Mozilla, doing a stump speech on the radical/political possibilities of open source, and also just a lot of social fun.
- I’m nearing the point of Project Kick Off for The Ascend Project, a six-week training course for people underrepresented in the current tech mainstream (and underemployed/unpaid) who will learn how to write automatable tests for MozMill. The first one will take place at the Portland office in Sept/Oct 2014 (starting Sept 8th). There’s so much more here, but this is just a sound bite.
- I’m trying to determine what budget I can get agreement on to put towards women in tech outreach this year.
- PyCon – yes! Such Feminist, So Hackerspace, Much gathering of geek feminists!
Dispatch from Larissa Shapiro:
- OPW wrapup and next session – we’re wrapping up the current round, scheduling brownbags for two of the current interns, etc. Funding is nearly secured for the next round and we have like 6 willing mentors. w00t.
- I’m also providing space for/speaking at an upcoming event in the Mountain View office: last year’s African Techwomen emerging leaders were part of a documentary and the Diaspora African Women’s Network is holding a screening and a planning session for how to support next year’s ELs and other African and African-American bay area women in tech both through this and other projects, March 29. Open to Mozilla folks, let me know if you’re interested.
Anything else that’s come up in the last week, or that you’d like Mozillians to know about? Let me know in the comments! [Source: Planet Mozilla]
Ludovic Hirlimann: Thunderbird 28.0b1 is out and why you should care
We’ve just released another beta of Thunderbird. We are now in the middle of the release cycle, before the next major version goes out to our millions of daily users (we’ve fixed 200+ bugs since the last major release, version 24). Currently fewer than 1% of our users run the beta, and that’s not enough to catch regressions: because Thunderbird offers mail, newsgroups and RSS feeds, we can’t cover every way our user base works. Many companies also sell extensions for spam filtering, virus protection and so forth, and the QA community just doesn’t have the time to try all of these with Thunderbird betas to find issues.
And that’s where you, dear reader, can help. How, you might ask? Well, here is a list of examples:
- Use the beta yourself; it’s downloadable at http://www.mozilla.org/en-US/thunderbird/all-beta.html
- Spread the word by sharing this with your friends.
- Are you a Rep? Make this post available in your language.
- Do you help on a support forum? Link to the beta download page and explain why it’s important to have more users on the beta.
- Do you work for a vendor that sells a product integrating with Thunderbird? Qualify your product against the beta so that neither of us gets surprises when we ship the final release.
- Does your company use Thunderbird? Set up a small group of beta users, gather the bugs and issues, and let us know.
If you find issues, let us know either through Bugzilla or through the support forums, so we can try to address them.
PS: the current download page says English only because of a bug in our build infrastructure for Windows. Linux and Mac builds are available localized. [Source: Planet Mozilla]
Al Billings: TrustyCon Videos Available
TrustyCon 2014 (maybe the only one ever) happened the other week as a competitor to the RSA convention because of perceived RSA collaboration with the NSA and all of the kerfuffle around the NSA and surveillance this last year. As they say on their site, “We welcome all security researchers, practitioners and citizens who are interested in discussing the technical, legal and ethical underpinnings of a stronger social contract between users and technology.”
The event sold out quickly, so I was unable to attend. Helpfully, it was livestreamed, making it available to everyone, and the resulting video was put up on YouTube. Unfortunately, it is one ginormous, seven-hour video. I don’t know about you, but I like my viewing in smaller chunks. I also tend to listen to talks and presentations, especially when there is no strong visual component, by saving the audio portion to my huffduffer account and listening to the resulting feed as a podcast.
I took it on myself to do a quick and dirty slice and dice on the seven-plus-hour video. It isn’t perfect (I’m a program manager, not a video editor!) but it works. I’ve uploaded the resulting videos to my YouTube channel in order to not destroy any servers I own. You can find the playlist of them all here, but I’ve also included the videos embedded below.
TrustyCon 2014 - Opening Remarks
TrustyCon 2014 - The Talk I Was Going to Give at RSA
TrustyCon 2014 - The Laws and Ethics of Trustworthy Technology
TrustyCon 2014 - Joseph Menn Interviews Bruce Schneier
TrustyCon 2014 - Securing SecureDrop
TrustyCon 2014 - New Frontiers in Cryptography
TrustyCon 2014 - Trusted Computing Tech and Government Implants
TrustyCon 2014 - Community Immunity
TrustyCon 2014 - Redesigning NSA Programs to Protect Privacy
TrustyCon 2014 - Thank You and Goodbye
[Source: Planet Mozilla]
Peter Bengtsson: Github Pull Request Triage tool
Last week I built a little tool called github-pr-triage. It's a single page app that sits on top of the wonderful GitHub API v3.
Its goal is to try to get an overview of what needs to happen next to open pull requests. Or rather, what needs to happen next to get it closed. Or rather, who needs to act next to get it closed.
It's very common, at least in my team, that someone puts up a pull request, asks someone to review it and then walks away from it. She then doesn't notice that perhaps the integrated test runner fails on it, and the reviewer is thinking to herself "I'll review the code once the tests don't fail", and all of a sudden the ball is not in anybody's court. Or someone makes a comment on a pull request that the author of the pull request misses in her firehose of email notifications. Now she doesn't know that the comment means that the ball is back in her court.
Ultimately, the responsibility lies with the author of the pull request to pester and nag till it gets landed or closed, but oftentimes the ball is in someone else's court, and hopefully this tool makes that clearer.
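The "whose court is the ball in" idea can be sketched roughly like this (a hypothetical simplification, not the tool's actual logic; the field names are invented for illustration, whereas the real tool derives its state from the GitHub API v3):

```python
def next_actor(pr):
    """Guess who needs to act next on a pull request.

    `pr` is a dict with made-up fields: "ci_state" (the combined
    commit status), "author", and "last_comment_by".
    """
    # A failing test runner always puts the ball back in the author's court.
    if pr["ci_state"] == "failure":
        return pr["author"]
    # If the last comment came from the author, the reviewer should act...
    if pr["last_comment_by"] == pr["author"]:
        return "reviewer"
    # ...otherwise the author needs to respond to the latest feedback.
    return pr["author"]
```

For example, a PR with green tests whose last comment came from a reviewer is back in the author's court.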
Here's an example instance: https://prs.paas.allizom.org/mozilla/socorro
Currently you can use prs.paas.allizom.org for any public GitHub repo, but if too many projects eat up all the API rate limits we have, I might need to narrow it down to mozilla repos. Or, you can simply host your own. It's just a simple Flask server.
About the technology
I'm getting more and more productive with Angular but I still consider myself a beginner. Saying that also buys me insurance when you laugh at my code.
So it's a single page app that uses HTML5 pushState and an Angular $routeProvider to create different URLs.
The server simply acts as a proxy for making queries to the GitHub API, and the reason for that is caching.
Every API request you make through this proxy gets cached for 10 minutes. But here's the clever part. Every time it fetches actual remote data it stores it in two caches. One for 10 minutes and one for 24 hours. And when it stores it for 24 hours it also stores its last ETag so that I can make conditional requests. The advantage of that is you quickly know if the data hasn't changed and more importantly it doesn't count against you in the rate limiter. [Source: Planet Mozilla]
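The two-tier scheme can be sketched in a few lines of Python (a simplification of the idea, not the project's code; `fetch` stands in for the real HTTP call, which would send the stored ETag in an If-None-Match header):

```python
import time

SHORT_TTL = 10 * 60      # fresh cache: 10 minutes
LONG_TTL = 24 * 60 * 60  # stale cache: 24 hours, kept with its ETag

short_cache = {}  # url -> (expires, data)
long_cache = {}   # url -> (expires, etag, data)

def get(url, fetch, now=None):
    """fetch(url, etag) must return (status, etag, data); a 304
    status means 'unchanged' and doesn't count against the rate limit."""
    now = time.time() if now is None else now
    if url in short_cache and short_cache[url][0] > now:
        return short_cache[url][1]  # still fresh: no request at all
    etag = None
    if url in long_cache and long_cache[url][0] > now:
        etag = long_cache[url][1]   # stale copy available: go conditional
    status, new_etag, data = fetch(url, etag)
    if status == 304:               # unchanged upstream: reuse stale copy
        data = long_cache[url][2]
        new_etag = etag
    short_cache[url] = (now + SHORT_TTL, data)
    long_cache[url] = (now + LONG_TTL, new_etag, data)
    return data
```

Within ten minutes you get the cached copy with no request at all; after that, the saved ETag lets the proxy confirm cheaply that nothing changed.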
Jim Chen: Fennec App Not Responding (ANR) Dashboard
Over the last few months, I've been working on an improved App Not Responding (ANR) dashboard for Fennec, which is now hosted at telemetry.mozilla.org/hang/anr. With the help of many people, I'm glad to say that the dashboard is now mature enough to be a useful tool for Fennec developers.
The idea of ANR/hang reporting is similar to crash reporting — every time the Fennec UI becomes unresponsive for more than five seconds, Android shows an “App Not Responding” dialog; the ANR Reporter detects this condition and collects the following information about the hang:
Stacks for Java threads in Fennec
Stacks for Gecko threads (C++ stacks and profiler pseudo-stacks)
System information listed in about:telemetry
Fennec logs to help debug the hang
The ANR Reporter is enabled on Nightly and Aurora builds only, and if the user has not opted out of telemetry, the collected information is sent back to Mozilla, where the data are aggregated and presented through the ANR Dashboard. Because the debug logs may contain private information, they are not processed and are only available internally, within Mozilla.
The ANR Dashboard presents weekly aggregated data collected through the ANR reporter. Use the drop-down list at the top of the page to choose a week to display.
Data for each week are then grouped by certain parameters from ANR reports. The default grouping is “appName”, and because ANR reports are specific to Fennec, you only see one column in the top hangs chart labeled “Fennec”. However, if you choose to group by, for example, “memsize”, you will see many columns in the chart, with each column representing a different device memory size seen from ANR reports.
Each column in the top hangs chart shows the number of hangs, and each column is further divided into blocks, each representing a different hang. Hover over the blocks to see the hang stack and the number of hangs. This example shows that 8 hangs with that signature occurred on devices with 768MB of memory over the past week.
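Conceptually, the chart is a two-level count: reports are grouped by the chosen parameter, then by hang signature (a toy illustration with invented field names; the real aggregation pipeline is more involved):

```python
from collections import Counter, defaultdict

def top_hangs(reports, group_by):
    """reports: list of dicts like {"memsize": "768M", "signature": "..."}.
    Returns {group_value: Counter({signature: count})} — one Counter
    per column in the chart."""
    columns = defaultdict(Counter)
    for r in reports:
        columns[r[group_by]][r["signature"]] += 1
    return columns
```

Grouping by "appName" yields a single "Fennec" column; grouping by "memsize" yields one column per device memory size.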
Colors are preserved across columns, so the same colored blocks all represent the same hang. The blue blocks at the bottom represent all hangs outside of the top 10 list.
To the right of the top hangs chart is the distributions chart. It shows how different parameters are distributed for all hangs. Hover over each block to see details. This example shows 36% of all hangs occurred on devices running Android API level 15 (corresponding to Android 4.0.3-4.0.4 Ice Cream Sandwich) over the past week.
The distributions chart can also be narrowed down to specific groups. This would let us find out, for example, on devices having 1GB of memory, what is the percentage of hangs occurring on the Nightly update channel.
Clicking on a block in the top hangs chart brings up a Hang Report. The hang report is specific to the column that you clicked on. For example, if you are grouping by “memsize”, clicking on a hang in the “1G” column will give you one hang report, and clicking on the same hang in the “2G” column will give you a different one. Switch grouping to “appName” if you want to ignore groups — in that case there is only one column, “Fennec”.
The hang report also contains a distributions chart specific to the hang. The example above shows that 14% of this hang occurred on Nexus 7 devices.
In addition, the hang report contains a builds chart that shows the frequency of occurrence for different builds. This example shows there was one hang from build 20140224030203 on the 30.0a1 branch over the past week. The chart can be very useful when verifying that a hang has been fixed in newer builds.
Last but not least, the hang report contains stacks from the hang. The stacks in the hang report are more detailed than the stack shown on the main page. You can also look at stacks from other threads — useful for finding deadlocks!
When comparing the volume of hangs, a higher number can mean two things — the side with the higher number is more likely to hang, or it simply has more usage. For example, if we are comparing hangs between devices A and B and A has a higher number of hangs, it is possible that A is more prone to hanging; however, it is also possible that A simply has more users and therefore more chances for hangs to occur.
To provide better comparisons, the ANR Dashboard has a normalization feature that tries to account for usage. Once “Normalize” is enabled at the top of the dashboard, all hang numbers in the dashboard will be divided by usage as measured by reported uptime. Instead of displaying the raw number of hangs, the top hangs chart will display the number of hangs per one thousand user-hours. For example, 10 hangs per 1k user-hour means, on average, 1000 users each using Fennec for one hour will experience 10 hangs combined; or equivalently, one user using Fennec for 1000 hours will experience 10 hangs total. The distributions chart is also updated to reflect usage.
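The normalization itself is straightforward arithmetic: divide the hang count by uptime expressed in thousands of user-hours (a sketch of the calculation, not the dashboard's code):

```python
def hangs_per_1k_user_hours(hang_count, total_uptime_hours):
    """Normalize a raw hang count by usage.

    10 hangs over 1000 user-hours of uptime -> 10 hangs per 1k
    user-hours, whether that is 1000 users for one hour each or
    one user for 1000 hours."""
    return hang_count / (total_uptime_hours / 1000.0)
```

Dividing by usage this way makes columns with very different user populations directly comparable.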
As a demonstration, the image below shows un-normalized hangs grouped by device memory size. There is no clear trend among the different values.
The image below shows normalized hangs based on the same data. In this case, it is clear that, once usage is accounted for, higher device memory size generally corresponds to lower number of hangs. Note that the “unknown” column became hidden because there is not enough usage data for devices with “unknown” memory size.
At the moment, I think uptime is the best available measurement for usage. Hopefully there will be a better metric in the future to provide more accurate results. Or let me know if it already exists!
[Source: Planet Mozilla]
Pierros Papadeas: Contribution Activity Metrics – Early attempts and fails
As we examined in the intro post, the need for contribution activity metrics across Mozilla’s different contribution areas has been high. It was only logical that many attempts were made to address this issue, mainly at the area level (and not at the Mozilla-wide level). Almost all of them had zero interaction with each other, and there was a general lack of vision for a holistic approach to the problem.
After one of our initial gatherings as the (then meta-) Community Building Team, a couple of people brainstormed together a possible solution to our problem. Together with Josh Matthews, Giorgos Logiotatidis, Ricky Rosario and Liz Henry a new approach was born. Enter project Blackhole!
Project Blackhole was a collaborative effort to develop and maintain an infrastructure for gathering and serving raw contribution data within Mozilla. We created a data architecture and flow, together with a data schema and specification, to describe contribution activities for the first time at Mozilla. The project went far enough (thanks to Josh) to produce a working prototype for the back end and front end.
What went right:
Having a single project to drive multiple metrics efforts forward got people engaged. Everyone saw the value of de-duplicating efforts and tapping into that as a resource. Also during the process of designing and testing it we were able to self-identify as a group of people that share interest and commitment towards a common goal. Most of those people went on to become active members of the Systems and Data Working Group. Finally, we ended up with a common language and descriptions around contribution activities, a really valuable asset to have for the future of cross-project tracking.
What went wrong:
Building *anything* from scratch can be hard. Really hard. First, everyone (rightfully) questions the need to build something instead of re-using what is out there. Once you get everyone on board, development and deployment resources are hard to find, especially on short notice. On top of that, Blackhole’s architecture *seemed* logical enough in theory but was never tested at scale, so nobody involved was 100% sure it would survive stress tests and the scale of Mozilla’s contribution ecosystem.
PRO TIP: Changing the project name does not help. We went from “Blackhole” to “Wormhole” (and back to “Blackhole”?), to better reflect the proposed data flow (data would not disappear forever!) and people got confused. Really confused. Which is obviously something that is not helpful during conversations. Pick a name, and stick to it!
The lack of a dedicated team, and our inability to get the project listed as a personal goal of people (or teams), halted any progress, leading us to a fearsome dead end.
What we learned:
As with most failures, this one was also really valuable. We learned that:
- we need to be a top-line goal for people and teams;
- we need to examine really well what is out there (internal or external to Mozilla) and investigate the possibility of re-using it;
- we need a clear and common language to make communication as effective as possible;
- we need to be inclusive in all our procedures as a working group, with volunteers as well as all paid staff;
- and in true Mozilla fashion: we need to start small, then test and iterate with a focus on modularity.
A way forward?
Having learned those lessons from the process, we sat down as a group last December and re-aligned. We addressed all five issues and now we are ready to move forward. And the name? Baloo. Stay tuned for more info in our next detailed post.
[Source: Planet Mozilla]
Andrew Halberstadt: Add more mach to your B2G
tl;dr - It is possible to add more mach to your B2G repo! To get started, install pip:
$ wget https://raw.github.com/pypa/pip/master/contrib/get-pip.py -O - | python
$ pip install b2g-commands
To play around with it, cd to your B2G repo and run:
$ git pull # make sure repo is up to date
$ ./mach help # see all available commands
$ ./mach help <command> # see additional info about a command
Most people who spend the majority of their time working within mozilla-central have probably been
acquainted with mach. In
case you aren't acquainted, mach is a generic command dispatching tool. It is possible to write scripts
called 'mach targets' which get registered with mach core and transformed into commands. Mach targets
in mozilla-central have access to all sorts of powerful hooks into the build and test infrastructure
which allow them to do some really cool things, such as bootstrapping your environment, running builds
and tests, and generating diagnostics.
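The dispatching pattern is easy to picture with a toy registry (a hypothetical sketch of the idea only; mach's real decorators and hooks are considerably richer than this):

```python
COMMANDS = {}

def command(name, help=""):
    """Register a function as a dispatchable command, mach-style."""
    def register(func):
        COMMANDS[name] = (func, help)
        return func
    return register

@command("build", help="Build the tree")
def build(*args):
    # A stand-in target; a real one would invoke the build system.
    return "building %s" % " ".join(args)

def dispatch(name, *args):
    """Look up a registered command by name and invoke it."""
    if name not in COMMANDS:
        raise SystemExit("unknown command: %s" % name)
    return COMMANDS[name][0](*args)
```

Each mach target registers itself this way, and the core simply maps the command-line verb to the registered function.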
A contributor (kyr0) and I have been working on a side project called b2g-commands
to start bringing some of that awesomeness to B2G. At the moment b2g-commands wraps most of the major
B2G shell scripts, and provides some brand new ones as well. Here is a summary of its current features:
- Bootstrap your environment - sets up system packages needed to build (includes setting up gcc-4.6)
- Easy to discover arguments - no need to memorize or look up random environment variables
- Helpful error messages where possible - clear explanations of what went wrong and how to fix it
- Fully compatible with existing build system including .userconfig
- List Android vendor ids for udev rules
- Clobber objdir/out directories
I feel it's important to reiterate that this is *not* a replacement for the current build system. You
can have b2g-commands installed and still keep your existing workflows if desired. Also important to note is
that there's a good chance you'll find bugs (especially related to the bootstrap command on varying platforms),
or arguments missing from your favourite commands. In this case please don't hesitate to contact me or
file an issue. Or, even better, submit a pull request.
If the feature set feels a bit underwhelming, that's because this is just a first iteration. I think
there is a lot of potential here to add some really useful things.
Unfortunately, this is just a side project I've been working on and I don't have as much time to devote
to it as I would like. So I encourage you to submit pull requests (or at least submit an issue) for any
additional functionality you would like to see. In general I'll be very open to adding new features.
In the end, because this module lives outside the build system, it will only ever be able to wrap existing commands or create new ones from scratch. This means it will be somewhat limited in what it is capable of providing. The targets in this module don't have the same low-level hooks into the B2G and gaia repos that the desktop targets have into gecko. My hope is that if a certain feature in this module turns out to be especially useful and/or widely used, it'll get merged into the B2G repo and be available by default. Eventually, I hope we implement some deeper mach integration into the various B2G repos (especially gaia), which would allow us to create even more powerful commands. I guess time will tell.
[Source: Planet Mozilla]
Christian Heilmann: Translating marketing texts for speaking – an experiment
As part of the workweek I am currently at, I set a goal to give a brownbag on “writing for speaking”. The reason is that some of the training materials I recorded for the Mobile World Congress were great marketing/press materials but quite a pain to speak into a camera while reading them from a teleprompter.
For the record: the original text is a good press release or marketing article. It is succinct, it is full of great soundbites and it brings the message across. It is just not easy to deliver. To show the issues and explain how that kind of wording can come across, I took the script apart. I explained paragraph by paragraph what the problems are and proposed a replacement that is more developer-communication friendly. You can see the result on GitHub:
The result is an easier-to-deliver text with less confusion. Here’s a recording of it to compare.
I will follow this up with some more materials on simpler communication for speaking soon. [Source: Planet Mozilla]
Lawrence Mandel: Lawrence Mandel Joins Mozilla Release Management
I’m excited to share that I am stepping into a new role with Mozilla as manager of the Release Management team. Below is an e-mail that my friend and manager Sheila Mooney sent to Mozilla employees last week announcing this change.
Date: Fri, 28 Feb 2014 11:19:07 -0800 (PST)
From: Sheila Mooney
To: team Mozilla
Subject: Changes in Release Management
I am happy to share some changes I am making to my team. Effective immediately, Lawrence Mandel will be moving into the role of Manager of the Release Management team. With the Release Managers in tight collaboration with the Project/Program Managers, we can think beyond just keeping the trains running on time and tighten our focus on quality, metrics and process to ensure we are shipping the best possible products to our users. Lawrence's experience inside and outside Mozilla aligns closely with these goals and I am very excited to see what he does with this role!
Lawrence will be transitioning many of his current project management responsibilities to others in my team in order to focus fully on this new challenge. The Web Compatibility Engineers will continue to report to him and Chris Peterson will report to me.
Please join me in congratulating Lawrence on his new opportunity!
Tagged: mozilla, release management [Source: Planet Mozilla]
Byron Jones: happy bmo push day!
the following changes have been pushed to bugzilla.mozilla.org:
-  add the “Preview” mode for attachment comments
-  Make the dashboard white-on-red counter easier to click
-  “Your Outstanding Requests” emails don’t include superreview requests
-  develop a system to track the lifetime of review/feedback/needinfo requests
-  all tracking flags are visible on the ‘change many bugs at once’ page
-  Create product and affiliations for Intellego project
-  grammar issue
-  join_activity_entries doesn’t reconstitute text with commas correctly.
-  enable USE_MEMCACHE on most objects
-  improve instrumentation of bugzilla’s internals
-  changing timezone breaks MyDashboard
-  increase the mod_perl sizelimit to 700_000 on production
-  Fix content-type for woff files
-  Comment and Preview tabs need accessibility markup
-  Comment textarea has padding:0
-  ReferenceError: REVIEW is not defined page.cgi javascript error when viewing a patch in Splinter
-  Please rename Talkilla product to Loop and update User Stories extension
discuss these changes on mozilla.tools.bmo.
Filed under: bmo, mozilla [Source: Planet Mozilla]
Jess Klein: On Designing BadgeKit
After several months of hard work by the Open Badges team, we are announcing that BadgeKit is now available in private beta. This means that BadgeKit is available in two forms: a hosted version of Mozilla BadgeKit, available in private beta for select partner organizations that meet specific technical requirements, and an open-source version that anyone can download from GitHub and run on their own servers.
BadgeKit is a set of open, foundational tools to make the badging process easy. It includes tools to support the entire process, including badge design, creation, assessment and issuing, remixable badge templates, milestone badges to support leveling up, and much more. The tools are open source and have common interfaces to make it easy to build additional tools or customizations on top of the standard core, or to plug in other tools or systems.
From a design perspective, this milestone represents refinements in user research and testing, user experience, user interface and branding.
In preparation for this release, we conducted extensive user research to define the needs and goals of badge issuers. This work, led by Emily Goligoski, helped to define requirements for the BadgeKit offering as well as inform the user experience. The research was done using a variety of methodologies; it is worth noting that all of this work was done in the open. Emily organized distributed user testing in key markets such as New York, Chicago and Toronto to do everything from needs analysis to accessibility and functionality testing. The Open Badges weekly community calls were leveraged to pull in input from the highly motivated research and practitioner cohorts. Much of the work is documented both on her blog and on GitHub. We paired every implementation milestone with some form of user testing and iteration. While this may sound obvious, it was a new way of working for our team, and I can unequivocally say that the product is better because of this practice. User research and testing did not happen in a bubble; rather, it became completely integrated with our design and implementation cycle. As a result, developers and designers became comfortable making informed iterations on the offering, as developers, designers and team researchers all participated in some form of user testing over the past three months.
We did user testing with members of the Hive in Brooklyn.
As a direct result of the extensive research and testing, the user experience for the entire BadgeKit offering was deeply refined. This work, led by Matthew Willse, introduced some new features, such as badge “templates”, which let any badge issuer clone a badge template and remix it. This gives us the unique ability to offer template packages based on common badge requests from the community, as well as eventually to empower the larger Open Badges ecosystem to develop badge templates of their own (and perhaps explicitly state how they are comfortable with their content being shared and remixed). One component of this work that evolved as a direct result of testing was the increased attention to copy. Sue Smith led this work, which entailed everything from tooltip development and a glossary to API documentation. Considering that BadgeKit takes an issuer from badge definition and visual design to assessment and issuing, designing the user experience was no small effort, and the attention to detail, combined with designing in the open, proved to be a solid approach for the team.
Perhaps the most obvious design component of this release is the user interface design and brand definition. Adil Kim kicked off this work with an exploration of the brand identity. BadgeKit is under the parent brand of OpenBadges, which sits under the even larger parent brand of Mozilla - which gave us the constraints of designing within the brand guidelines. After exploring options to represent the visual metaphor for this modular system, here is the new logo:
The logo is meant to evoke the imagery of both a badge and a tool in one glance. For the untrained craftsperson (ahem), while gazing into the mark you will see a bolt. This connotes that BadgeKit is a tool, something that allows you to dive into the details and construct a badge, and a system for your community. The logo incorporates the palette from Mozilla Open Badges in a playful Möbius, at once implying that while this is a handcrafted experience, it is also a seamless one. This logo fits nicely into the larger brand family while reading on its own, as if to say, “hey, BadgeKit is the offering for badge MAKERS, dive in and get your hands dirty!”
The brand is in turn extended to the user interface design. The overall art direction was that this needs to be clean, yet approachable. We know that many organizations will not be using all of the components in the interface directly on badgekit.org; however, the design needs to take into account that everything should be accessible and read as remixable. Some details to note here are the simplified navigation, the palette, and subtle touches like the ability to zoom on hovering over thumbnails.
It’s worth noting that while Emily, Matthew, Sue and Adil, as well as Carla, Meg, Erin, Jade, Sabrina Ng, Chloe and Sunny, were invested in much of this design work, there was an intentional yet organic partnership with the developers (Zahra, Erik, Andrew, Chris, Mavis Ou, Mike and Brian, plus many, many community contributors) who were doing the implementation. We had weekly critiques of the work and often engaged in conversation about design as well as implementation on GitHub.
And the good news is that design never ends! Design isn’t a destination; it’s an invitation to a conversation. Check it out and let us know what’s working and, importantly, what’s not.
[Source: Planet Mozilla]