Have modern programming languages failed? From the point of view of learnability and maintainability, yes! What would a truly maintainable and learnable programming language look like? This is the fifth of a six-part series exploring the future of programming languages (read The World’s Most Maintainable Programming Language: Part 1, The World’s Most Maintainable Programming Language: Part 2, The World’s Most Maintainable Programming Language: Part 3, The World’s Most Maintainable Programming Language: Part 4, and The World’s Most Maintainable Programming Language: Conclusion).
Enforcing Good Programming Practices
Maintainability requires developer support.
Despite all of the design work done so far to make this language as learnable and maintainable as possible, asking imperfect humans to write code even in a perfect language means that it is still possible for them to make mistakes. The language can only alleviate the problem; it is probably impossible to work around malevolent coders through language design alone. Within the compiler and other tools, however, anything is possible.
Here are several aspects of maintainability that the compiler tools can enforce.
Invariant Code Formatting
Inconsistency breeds unfamiliarity.
How many developers and projects and teams have argued for far too long over minutiae such as brace placement and indentation? Combatants claim that one style or another is superior for fitting n lines on a page or m characters on a line, or for deciphering complex expressions, or for static analysis. Certainly avoiding this argument is useful. Fortunately, this corresponds to the primary principle of maintainability: optimize the language for learnability.
By enforcing the best particular style of code formatting as a syntactic constraint within the language parser itself, the language tools can reject malformed programs before the ill-formatted source code enters a source code repository or escapes the developer’s IDE. A potential language slogan is “If it compiles, it’s obviously readable!”
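To make the slogan concrete, here is a minimal sketch of the kind of pre-parse gate such a toolchain might run. The house style here (no tabs, no trailing whitespace, four-space indents) is an invented placeholder for whatever the language designers decree:

```python
import sys

def check_formatting(source: str) -> list[str]:
    """Reject malformed programs before they ever reach the parser.

    Hypothetical house style: no tabs, no trailing whitespace,
    and indentation only in multiples of four spaces.
    """
    errors = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "\t" in line:
            errors.append(f"line {lineno}: tab character (use spaces)")
        if line != line.rstrip():
            errors.append(f"line {lineno}: trailing whitespace")
        indent = len(line) - len(line.lstrip(" "))
        if indent % 4 != 0:
            errors.append(f"line {lineno}: indent of {indent} is not a multiple of four")
    return errors

if __name__ == "__main__":
    problems = check_formatting(open(sys.argv[1]).read())
    if problems:
        print("\n".join(problems))
        sys.exit(1)  # if it compiles, it's obviously readable
```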
This solves only part of the problem, however — that of sharing source code with teammates on a project. As long as untargeted legacy editors (like vi and emacs) drag development into the morass of backwards compatibility with dumb terminals (3270s are dead, people!), there can never be a single technical solution to prevent people from posting poorly-formatted code to the Internet. Perhaps one answer is to write a series of plugins for the most popular web browsers and e-mail clients that embed the language parser and syntax checker to catch (or even rewrite) formatting errors in public forums.
The less poorly-formatted code allowed, the fewer chances novices will have to see unreadable code. Certainly experienced developers will have little trouble deciphering a misplaced brace (if the language even allows braces to denote scopes and nesting constructs), but consider the greater value of extreme consistency to novices: it makes understanding programs instinctive, even trivial.
Clear thinking promotes clarity.
One of the most overlooked disciplines of writing maintainable code is carefully considered symbol naming. A well-chosen symbol name may never be as useful as rigorous commenting according to a standard template, but it is a good secondary or tertiary hint as to the intent of a section of code in its context. A poorly selected name, on the other hand, is an inescapable primary source of misunderstanding and error.
As usual, the proper way to ensure that novices can understand even the largest and (supposedly) most complex programs with ease is for the compiler to enforce a small but comprehensive set of guidelines for symbol names. While natural language processing is still an imperfect science, approaching a subset of the problem makes it tractable.
For example, identifier lengths are important in inverse proportion to the lifespan of the identifier within a section of code. That is, a short name such as idx is most appropriate for loop variables. Through flow and lifespan analysis, the compiler can infer the Hamming information density measure of a container, decide the most appropriate minimum length of an identifier based on the usage and visibility of the identifier, and reject symbolic names whose extents do not appropriately reflect their significance.
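As a sketch of how such a policy might behave, assume the lifespan analysis simply measures the lines between an identifier's first and last use. The thresholds below are invented placeholders; the version described above would also weigh usage and visibility:

```python
def minimum_name_length(lifespan_in_lines: int) -> int:
    """Hypothetical policy: the longer an identifier lives,
    the more descriptive (longer) its name must be."""
    if lifespan_in_lines <= 5:
        return 1   # loop variables: i, j, idx
    if lifespan_in_lines <= 25:
        return 4   # local helpers
    return 8       # long-lived, widely visible names

def identifier_is_acceptable(name: str, first_use: int, last_use: int) -> bool:
    lifespan = last_use - first_use + 1
    return len(name) >= minimum_name_length(lifespan)

# "idx" is fine inside a five-line loop, but not across forty lines.
assert identifier_is_acceptable("idx", first_use=10, last_use=14)
assert not identifier_is_acceptable("idx", first_use=10, last_use=50)
```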
Misspellings are embarrassing and difficult to explain; how many erstwhile programmers tried their hands at CGI only to stare in shock at the HTTP_REFERRER environment variable being strangely unset? Through standard grammar-analysis tools available to any text processor, the compiler can catch simple spelling errors as well as subtler problems such as homophonic confusion. For example, it could consult the WordNet database and reject identifiers with excessive numbers of synonyms or senses, those with none at all, or even just very obscure words. Even though the automatic declaration of type-inferred symbols manages variable lifespans effectively, avoiding multi-symbol overloading in nested scopes, having an extra safety check to prevent accidental letter transposition can only help.
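As a sketch of the dictionary check, a compiler pass might consult WordNet through the NLTK corpus interface. The sense-count threshold is an invented placeholder, and the corpus must be downloaded separately:

```python
# Requires: pip install nltk, then nltk.download("wordnet")
from nltk.corpus import wordnet

def vet_identifier(name: str, max_senses: int = 5) -> str | None:
    """Return a complaint about a single-word identifier, or None.

    Policy from the text: reject words with no WordNet entry
    (likely misspelled or too obscure) and words with too many
    senses (too ambiguous to name anything precisely).
    """
    senses = wordnet.synsets(name)
    if not senses:
        return f"'{name}': unknown or obscure word (misspelling?)"
    if len(senses) > max_senses:
        return f"'{name}': {len(senses)} senses is too ambiguous"
    return None

print(vet_identifier("idx"))   # no WordNet entry: flagged
print(vet_identifier("run"))   # dozens of senses: flagged
```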
At a higher level, consistency in the form of identifiers is immeasurably important. As the original intent of Hungarian notation implied (and not its poorly understood and even more poorly practiced half-breed descendant), the type of a value and its identifier must at least be congruent. Function, procedure, and method identifiers all follow a similar verb-plus-noun-phrase pattern. The parser can detect and reject identifiers that do not match these patterns. Even better, the type inferencer can gather implicit semantic hints from these names. For example, a variable named running is obviously a boolean — all gerunds are. Hence the compiler can automatically type-check such a variable even in the absence of explicit type declarations (ignoring for the moment that explicit typing can be antithetical to both learnability and maintainability).
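A toy rendering of that inference rule follows. The suffix test below is a naive stand-in for real grammatical analysis, and the rule set is invented for illustration:

```python
def infer_type_from_name(identifier: str) -> str | None:
    """Guess a type from an identifier's grammatical form.

    Toy rules in the spirit of the text: gerunds ("running",
    "waiting") are booleans, plural nouns are collections, and
    an "is_"/"has_" prefix also signals a boolean.
    """
    if identifier.startswith(("is_", "has_")):
        return "bool"
    if identifier.endswith("ing"):
        return "bool"   # all gerunds are
    if identifier.endswith("s"):
        return "list"
    return None         # no hint; fall back to ordinary inference

assert infer_type_from_name("running") == "bool"
assert infer_type_from_name("sprinklers") == "list"
assert infer_type_from_name("kernel") is None
```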
These are merely symptoms of a larger issue, however. Sufficiently clear code (and sufficient is a half-hearted goal here) uses identifiers of precise and accurate semantic meaning within the problem domain. Eschewing the use of false cognates and inappropriate metaphors will make software suited to a particular class of problem much more approachable to novices. Imagine a farm-control program using an explicit microcontext-based event system. Referring to the master event control loop as the kernel, no matter how tempting a pun with regard to operating system theory, would be confusing if the system also monitored sprinklers and gauges related to a cornfield. There is little literature on the subject of domain-appropriate naming with regard to compilers, so there is an opportunity to break new ground in this fertile subject area.
Though many languages aiming, to one degree or another, for ease of learnability have included some notion of cross-human-language compatibility, few have addressed this in terms of identifiers. Machine-aided translation is now cheap, easy, and accurate enough to be able to transcode source code between most human languages. Imagine the boon to internationalization, localization, and accessibility as well, with complete toolchain support for localized and appropriate text representations, not just in context-adaptive I/O operations, but in the very source code itself.
One outstanding problem in this area is how to deal with characters outside of the standard ASCII range. For pragmatic purposes, it may be acceptable to use Latin-1 encoding for the first few pre-release versions of the language. For the final released version, hopefully the new standard of language-independent character encoding based on the International Phonetic Alphabet (that is, Phoneticode) will have replaced Unicode.
Repetition is hazardous and troublesome. Eliminating repetition eliminates dangers.
Repeated (or, worse, near-repeated) code is dangerous. Eliminating repetitive code reduces hazards and troubles. This is a well-acknowledged fact published and cited throughout the reputable literature.
Having a comprehensive standard library helps alleviate this problem to some degree for many languages. Consider how PHP and Java have both avoided the trouble of conflicting add-ons by aggressively integrating new features into the language. (Of course, Python has done a similar job through a rigorous set of community procedures, but that’s difficult — though not impossible — to emulate in a parser and compiler.)
Having a standard library is not enough, however. Duplication comes not only in reusable components, but at levels as small as the statement, the word, the individual unit, the atom, and the expression. Duplication also scales up to the program level, where the element of reuse is not just a subroutine or class, but an entire class of programs.
If there is one and only one obvious way to solve a problem, why make two people stumble upon it on their own? Half of the solution to this duplication is to expand the standard library to include these elements of solutions as well.
The other half of the solution is to add rigorous duplication detection and removal measures within the compiler suite. For example, to prevent inexperienced developers from copying and pasting code found on the Internet and making voodoo changes, the compiler will consult a database of hashed code fingerprints and fail code too similar to known examples. Published code examples will be minimal; the proper way to produce effective programmers is through rigorous training classes, where instructors can enforce the recommended maintainability techniques. This also avoids some of the difficulties of enforced code formatting outside of the standard toolkit — on Usenet, for example, or web-based message boards.
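One plausible shape for the fingerprint check is to hash a normalized token stream, so that reflowed whitespace and stripped comments do not disguise a copied example. This sketch uses Python's own tokenizer as a stand-in for the language's, with an invented in-memory database:

```python
import hashlib
import io
import tokenize

SKIP = (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
        tokenize.INDENT, tokenize.DEDENT)

def fingerprint(source: str) -> str:
    """Hash a code fragment by its token stream."""
    tokens = [tok.string
              for tok in tokenize.generate_tokens(io.StringIO(source).readline)
              if tok.type not in SKIP]
    return hashlib.sha256(" ".join(tokens).encode()).hexdigest()

# Hypothetical database of fingerprints of published examples.
KNOWN_EXAMPLES = {fingerprint("for i in range(10): print(i)\n")}

def reject_if_copied(source: str) -> None:
    if fingerprint(source) in KNOWN_EXAMPLES:
        raise SystemExit("error: code too similar to a published example")

# Reformatting alone does not evade the check:
reject_if_copied("for i in range(10):  print( i )\n")  # raises SystemExit
```

Renamed identifiers would still slip past an exact hash, which is where the fuzzier token-similarity pass described next comes in.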
As mentioned earlier, this also scales far smaller, past the functional unit where most static analysis tools stop. Duplication is duplication even in terms of flow control constructs. The compiler will detect token-based similarity with an empirically determined fuzz factor (probably 80% to start, but refined as the language approaches its final release); at or above that similarity level, the compiler automatically refactors the duplicated lines in the original source file(s) to remove the duplication and parameterize the remaining code.
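That fuzzy pass might be approximated with a standard sequence-similarity measure. Here is a sketch using difflib with the 80% starting threshold from the text; detection only, with the automatic refactoring step left as the hard part:

```python
import difflib

FUZZ_FACTOR = 0.80  # empirically determined starting point

def similarity(tokens_a: list[str], tokens_b: list[str]) -> float:
    """Token-based similarity between two code fragments."""
    return difflib.SequenceMatcher(a=tokens_a, b=tokens_b).ratio()

a = "total = price * count + shipping".split()
b = "total = cost * count + shipping".split()

if similarity(a, b) >= FUZZ_FACTOR:
    print("duplicate detected: refactor and parameterize")
```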
Note that no other language has this feature of refactoring an entire codebase at compile time, producing human-language-localized and perfectly maintainable source code. Even if there were no other benefits, this alone would be an amazing point of value.
Finding and fixing errors before running a program is easier than finding and fixing them after deployment.
The shorter the time between making a mistake and realizing it, the easier it is to learn from it. This is especially important when learning a language, where frustration is an immense problem. Undoubtedly all experienced programmers remember how it feels to fight the compiler and language to do something that seems so simple — perhaps that disheartening sensation caused many erstwhile programmers to give up rather than persevere. But even experienced programmers occasionally fall into this trap, wasting hours debugging unexpected side effects of dynamic behavior.
Minimizing this difficulty will make the language easier to use and to learn. One way of doing this is to promote solid, capable static analysis. The compiler should be able to exercise all code paths at compile time to warn about and fix errors as soon as possible. Coupled with the idea of a single, preferred development environment, it should be possible to perform incremental compilation and analysis. Perhaps the final released version of the language and platform will detect errors as developers type — a kind of semantic syntax highlighting — to shorten the feedback loop between making a mistake and correcting it.
The ultimate goal, of course, is a development environment that automatically refactors code as you type it. The obvious extension of this idea is to apply the symbol-naming analysis to determine the purpose of the code and then to correlate that with similar existing code.
No Compiler or Platform-Specific Code
Portability prevents strange bugs.
Unfamiliar code and constructs are impediments to clarity and maintainability. It is therefore important that the language and its environment perform exactly the same on all supported platforms. Many other supposedly-portable languages allow the use of platform-specific extensions, but this is the camel’s nose in the tent; permitting developers to do the wrong thing means that they will do the wrong thing.
(Some find it disconcerting to note that apparent portability matters more in language and platform debates than actual portability. That is, it’s likely that the latest version of C# will run on such “exotic” platforms as Linux PPC — albeit thanks to Mono — long before the previously stable version of Java will. Thoughtful language designers must take actual portability into account.)
To forbid the use of platform- and implementation-specific quirks, there will, of course, be a total compatibility kit of test suites to prove that any specific implementation provides exactly the behavior given in the specification and no more. It is allowable to produce open source, free software, public domain, or proprietary implementations, of course, though to prevent confusion all implementations will have to prove themselves via the TCK.
Some developers raise the issue of performance, as if it were ever worthwhile to sacrifice maintainability for perceived speed. In this age where even Apple Computer has moved its boutique products to multi-gigahertz processors, sparing a few cycles here and there for compatibility can reduce confusion and, almost as importantly, odd bugs. (It’s amusing to note that Mac OS X, being essentially Unix, can run the open source DTrace facility bundled with Solaris, allowing even easier tracing and optimization. Portability, not just of Mac OS X but of the language itself, solves its own problems.)
Forbidding the use of platform- and implementation-specific quirks is good, but it can never ensure the absence of such things. Fortunately, there’s another technical solution the compiler and toolkit can provide: the requirement that all valid code must compile to the same underlying platform representation on two separate implementations of the compiler running on two different platforms. Obviously this represents an opportunity for a new business service. Not only can an enterprising language designer with some seed capital build a compile farm to offer cross-platform, cross-implementation verification services at reasonable prices, the same infrastructure can gather information on the compiled programs to fulfill some of the necessary capabilities of DRY reporting and dispersal.
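Mechanically, the verification half of that service could reduce to comparing digests of the output from two independent toolchains. In this sketch, compilerA and compilerB are hypothetical command names for two separate implementations:

```python
import hashlib
import subprocess

def compile_and_digest(compiler: str, source: str, output: str) -> str:
    """Compile with the given toolchain and hash the result."""
    subprocess.run([compiler, source, "-o", output], check=True)
    with open(output, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Two independent implementations, ideally on different platforms.
digest_a = compile_and_digest("compilerA", "program.src", "out_a")
digest_b = compile_and_digest("compilerB", "program.src", "out_b")

if digest_a != digest_b:
    raise SystemExit("error: implementations disagree; "
                     "platform-specific behavior detected")
```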
It’s very satisfying for a language designer to solve two seemingly incongruent problems (especially those largely thought intractable by the greater language design community) with each other.
A Powerful Type System
Forbidding incorrect operations as early as possible prevents bugs.
A common source of errors in most programming languages is using data incorrectly. While many languages require type annotations to identify the expected uses and values and behaviors of all data, more powerful languages can infer this information from the code. The language should never require type annotations, while still warning, at compile time or earlier, about the incorrect use of variables and data.
Some language advocates claim that the additional bookkeeping of defining and declaring types is too much work for dubious benefit. This may be true for short, non-essential programs used for two-minute tasks and arguing strawmen vociferously, but all real-world programs must model the real world. A truly learnable programming language will exploit this real-world experience. In the real world it is an error to put ten pounds of potatoes in a five-pound sack, so why not let the compiler and its tools detect this obvious mistake?
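As a toy of the potato-sack example, imagine capacity being part of a value's type, with a checker (all names invented here) that rejects the overfilled sack during analysis rather than at run time:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pounds:
    value: float

@dataclass(frozen=True)
class Sack:
    capacity: Pounds  # the capacity is part of the sack's declared type

def check_fill(sack: Sack, contents: Pounds) -> None:
    """The check a capacity-aware type system would perform during
    analysis, long before the program ever runs."""
    if contents.value > sack.capacity.value:
        raise TypeError(f"cannot put {contents.value} lb of potatoes "
                        f"into a {sack.capacity.value} lb sack")

check_fill(Sack(capacity=Pounds(10)), Pounds(5))   # fine
check_fill(Sack(capacity=Pounds(5)), Pounds(10))   # rejected
```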
There is one class of problems to which this truly does not apply as written, and that is experimental modeling of non-Earth physics. In a space-time simulator that attempts to model the effects of different cosmological constants, it must be possible to recalibrate some aspects of the type checker. The same might be true of computer games, where a type checker so careful that it refuses to let a 180-pound character carry 10,000 gold pieces might actually remove the fun from the game.
This is perhaps the most difficult place to balance correctness in its various forms. Perhaps it is a mistake, but allowing programmers to declare their own types and interactions may prove useful. Given a simple syntax and core algebra, it may be possible to make this also accessible to novices. Certainly anyone who has ever played “What if?” will find such hypothetical constructs understandable.
Correct code has fewer bugs.
Static analysis is good and a powerful type system is useful, but the real power of any programming language comes from its mathematical underpinnings. Thus the language must require that any program carry and fulfill a mathematical proof of its correctness. It should be impossible to write incorrect code in this language — this also makes it much easier for novices to learn and maintain code, as they will never introduce bugs into the system.
In practice, this has failed in most modern languages because their programs try to do too much. That is, the weight of proof-carrying annotations is too great when compared to the practical behavioral code. A better language will reduce this by enforcing the use of large programs built from small, already-correctness-proven components. With this problem already solved by other language features, proving the correctness of an arbitrary program is a simple job that perhaps even the compiler tools can perform.
A Single Development Environment
Foolish inconsistency is the hobgoblin of lazy minds.
The consistency of most programming languages suffers from overconfigurability. Within a single shop, developers can rarely agree upon a single text editor, let alone tools such as memory checkers and debuggers. Pity the poor programmer who tries to work alongside a partner, where the macros and shortcuts are different and nothing is the way it seems. Imagine trying to work in an office where the chair is at the wrong height or the screen is too low — yet isn’t trying to adjust to these artificial differences in tools just as annoying and unproductive?
The language will support one development environment, one set of key bindings, and only a standardized set of macros to prevent confusion and aid maintainability by reducing the amount of information a new developer must learn.
Of course, this means that the core language and library developers must be familiar with all of the contexts and domains in which programmers will use this code, but this is clearly easier than mastering all of the exponential combinations of more than a few core primitives. Put more directly, it’s easier to understand all of the ways in which people will use a language than all of the possible ways they can combine the fundamental elements of the language.
Another advantage, besides the obvious gains in interchangeability and learnability, is that focusing development efforts on a single set of tools can only improve them. Instead of spending time and effort and money reinventing the same programs in competition, developers and vendors can provide more value building out from a center point of collaboration. Many eyes will make all tools shallow.