Related link: http://news.bbc.co.uk/1/hi/technology/3578309.stm
A Linux-based handheld aimed at the developing world has finally been released. In India, at least, the Simputer should provide mobile networked computers to the masses.
The W3C XML Schema Working Group has published the
second edition of the three parts to XML Schema.
This is a housekeeping edition.
The changes of note in the XML Schemas Second Edition
seem to be:
Should you go rushing out to read it all? Well, if you
enjoyed reading it the first time around, all your favourite passages will still be there!
I have heard that the film rights have been negotiated
for an epic trilogy, but the producers are not sure whether to make it in the style of The Matrix (programmer wakes up in a nightmare world, utterly impenetrable, sponsored by Oracle), LOTR (enormous scale, but only the innocent can handle something so powerful without being corrupted) or Harry Potter (problems solved with a magic wand, but full of goblins).
The Working Group also has
a call out for requests for an XML Schemas 1.1;
my prediction is that XML
Schemas 1.1 will be XML Schemas 1.0 harmonized with the
XQuery type hierarchy, and with document-oriented SGML-isms like NOTATION and ENTITY* removed: slightly more DBMS-oriented and even less publishing-oriented.
I am not holding my breath that the big boys
will be keen on anything more
disruptive than that, but on the other hand, as users get enough experience to sort out the wheat from the chaff, we might get some widespread support for SGML-to-XML-style refactoring eventually.
Anyway, congratulations to Henry, Michael, Dave and the Working Group.
This is a nice synopsis of major P2P applications and their networking, in the form of a “how to block these apps” guide. It gives some insight into P2P and firewalls.
Here is the PowerPoint presentation I gave on carbot and in-car computing @ O’Reilly’s Emerging Technology conference. I’ll put the video of the presentation up soon.
Here is a mini-review of four Open Source program checkers for Java, based on using them recently to check a large, multi-threaded desktop application with hundreds of classes.
Every six months, we audit our code base using various Java code-checking tools; I go through the results (which contain a lot of dross) and forward the interesting ones to the programmers for fixes or justifications. I find this has four benefits:
We already develop from inside Eclipse and some of our programmers routinely use extra code-checking plugins, and much of the code base has been through this audit three or four times before, though continual change means earlier confidence soon goes out of date. So I was interested to see which code-checking tools seemed to provide the most value for a project at this stage.
It seems that JLint and FindBugs cover pretty
orthogonal areas: I would recommend them as the basic bug finding utilities; they found about a dozen good bugs.
PMD and CheckStyle are mostly concerned with infelicities of design or style, but they had a really low hit rate for detecting bugs: I put them in the “prevention is better than cure” school of tools. Maybe it is the luck of the draw, or maybe we had already fixed those bugs in a previous run, or maybe they were not the kinds of bugs our guys make; I cannot say.
What a beauty. JLint has two command-line tools:
FindBugs is available on the command line and as an Eclipse plugin. Best tests: detect exception paths where streams are not properly closed, detect certain synchronization problems.
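As a hedged illustration of the first of those tests (class, method and file names are invented for the example, not taken from our code base), this is the shape of bug it finds: if the second stream constructor or the copy loop throws, the already-opened stream is never closed. The usual fix is the try/finally pairing shown in the second method.

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class CopyExample {
        // The pattern the tool complains about: if the FileOutputStream
        // constructor or read() throws, "in" is never closed.
        static void copyLeaky(String from, String to) throws IOException {
            FileInputStream in = new FileInputStream(from);
            FileOutputStream out = new FileOutputStream(to);
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
            in.close();
            out.close();
        }

        // The fix: close the streams in finally blocks so that every
        // exception path releases them.
        static void copySafe(String from, String to) throws IOException {
            FileInputStream in = new FileInputStream(from);
            try {
                FileOutputStream out = new FileOutputStream(to);
                try {
                    int b;
                    while ((b = in.read()) != -1) {
                        out.write(b);
                    }
                } finally {
                    out.close();
                }
            } finally {
                in.close();
            }
        }
    }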
PMD is an Eclipse plug-in which is notable because you can write your own rules using XPath expressions over an XML version of the parse tree: shades of Schematron! This might be particularly useful to enforce in-house policy such as “names of private fields must start with ‘my’ and no
other name may start with ‘my’”, or to detect that some JRE-, platform- or library-specific workaround has been used. Best tests: detect empty catch blocks, detect unused fields, imports etc.
Another aspect of PMD that may be useful on some projects is its ability to detect near-duplicate code: where a programmer has cut-and-pasted code from one place into another. I suppose there would be three reasons for wanting to know this: to check whether refactoring into a method or class would be better, to check whether a fix in one place should also be applied in other places, and to check whether code has been copied contrary to licensing.
Enabling all rule sets in PMD makes it seem like
a nanny on the verge of a nervous breakdown: the warning levels are all too high (compared to the default levels that other plugins provide) and there are too many complaints about trivial matters. But that is just a matter of configuration. Out of the box, PMD is probably better for continual use when coding new classes to promote good practices.
CheckStyle is an Eclipse plugin that covers pretty similar ground.
Best tests: String comparison using equality, check that an overriding finalize() method includes a call to super.finalize(), check that classes that override equals() also override hashCode().
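A small, made-up example of the kind of code those checks flag (the class is hypothetical, not from our project):

    public class Account {
        private final String id;

        public Account(String id) {
            this.id = id;
        }

        // String comparison using equality: compares references, not contents;
        // should be "guest".equals(id).
        public boolean isGuest() {
            return id == "guest";
        }

        // equals() is overridden but hashCode() is not, so Accounts will
        // misbehave as keys in HashMaps and members of HashSets.
        public boolean equals(Object other) {
            return other instanceof Account && ((Account) other).id.equals(id);
        }

        // An overriding finalize() that forgets to call super.finalize().
        protected void finalize() throws Throwable {
            // release resources here... (should end with super.finalize();)
        }
    }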
Checkstyle makes the usual mistake when calculating
cyclomatic complexity of counting top-level case statements even when they don’t use fallthrough: that is
structure, not complexity!
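To illustrate the point with a hedged, made-up example: under the usual counting rule each case label adds one decision point, so a flat dispatch method like the one below scores a cyclomatic complexity of about 5, even though there is no fallthrough and nothing a maintainer would call complex.

    // Each case counts toward cyclomatic complexity (roughly 4 cases + 1 = 5),
    // yet the method is just a flat table of alternatives: structure, not complexity.
    static String dayName(int day) {
        switch (day) {
            case 1: return "Monday";
            case 2: return "Tuesday";
            case 3: return "Wednesday";
            case 4: return "Thursday";
            default: return "Unknown";
        }
    }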
There is a wonderful Indian product
which reports metrics, bugs and style issues, and
does a much better job with complexity metrics than
any of these tools. In previous audits we looked at complexity more; for this audit I just picked the top ten most complex methods for refactoring.
Any tips on good bug-finding tools?
Java already has chunks from IBM-sponsored Open Source projects:
the XML libraries,
and the Taligent-derived internationalization technologies developed by Mark Davis’ ICU4J team.
IBM is talking to Sun about some kind of further Open Sourcing arrangement for Java.
Java is openish: it has a community process to guide the development of new libraries, its source code is readily available, it puts out betas for advance feedback, and it has had a bug tracking forum with voting to allow the least popular bugs to be tracked (and, I suppose, to get addressed earlier: nice if everyone really has the same bugs.)
Looks good? But there is also a dynamic at play which keeps some parts of Java in the doldrums, and I don’t see that shifting Java over to some IBM/Sun-sponsored consortium would necessarily make much difference.
Open Sourcing can only deliver the benefits of 10,000 eyes when feedback and enhancements can be merged back into the code base fast. Any organization dealing with code fixing must prioritize their fixes according to their own lights and capacity. As Joel Spolsky creepily
puts it: “Fixing bugs is only important when the value of having the bug fixed exceeds the cost of fixing it.”
So Open Sourcing is effective when it allows the people who have the value requirement (the users of the code) to get the code fixed: the owners of the code simply may not have the value requirement, especially for a
loss-leader like Java.
Shifting all of J2SE
holus-bolus from one big organization to another might help broaden the priorities for fixes and enhancements, but that simply is not the game we should be playing. Anyone who has waited for known bug fixes to be folded back into Apache Xerces, say, will sympathize.
It would be better for Sun to look at the parts of Java that do not lead to server sales or profit, and to farm them off into smaller, separate, focussed Open Source efforts which can concentrate on maintaining them. The name of this game is No Resource Contention: the community interested in one library should be able to concentrate on it.
One library that leaps to mind in this is javax.swing.text.html.
Arthur van Hoff’s great parser class has not changed much since it was created, around 1994: it implements a now-sucky HTML 2 which cannot handle XHTML (“/>” empty elements, “&#xABC;”-style hex character references, and so on), let alone HTML 4 or W3C DOMs. Its nice SGML-compliant features have never been developed further, and it has a show-stopping flaw for the modern Web: you are bound to get an encoding exception if you try to feed it an HTML document as a String in the harmless/typical case where the markup has a meta tag that specifies the MIME Content-Type and encoding.
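For the record, here is a minimal sketch of the failure and the commonly cited workaround, assuming the standard javax.swing.text.html classes: the parser bails out with a ChangedCharSetException the moment it meets the meta charset tag, unless the document is explicitly told to ignore the charset directive.

    import java.io.StringReader;
    import javax.swing.text.html.HTMLDocument;
    import javax.swing.text.html.HTMLEditorKit;

    public class SwingHtmlParse {
        public static void main(String[] args) throws Exception {
            String html = "<html><head>"
                + "<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\">"
                + "</head><body><p>Hello</p></body></html>";

            HTMLEditorKit kit = new HTMLEditorKit();
            HTMLDocument doc = (HTMLDocument) kit.createDefaultDocument();

            // Without this line, kit.read() throws
            // javax.swing.text.ChangedCharSetException as soon as it sees the
            // meta charset tag, even though the String is already decoded.
            doc.putProperty("IgnoreCharsetDirective", Boolean.TRUE);

            kit.read(new StringReader(html), doc, 0);
            System.out.println("Parsed " + doc.getLength() + " characters");
        }
    }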
Of course, this all reflects Sun’s earlier priorities, where Java applets ran inside browsers; but how I envy Windows programmers, who get to use Internet Explorer as a component when they need it. (In fairness, Sun has not been doing nothing for HTML: the latest 1.4.2_04 release this week has a fix relating to frames, for example. And maybe they will be serious about their recent Java Desktop brand.)
Democracies prevent mass interest from swamping regional requirements by having independent secondary or tertiary tiers (e.g., states or local councils). It is fine to have a senate deciding on JCP, and mass voting deciding on bug priorities, but Java needs an organizational process to farm value from autonomous efforts so that the users are the developers.
Even if Sun prefers to retain control over a strategic asset like Java, it should take a hard look at the components of Java that are related to platform-breadth and market-reach rather than to profitability, and slough these off to small, reactive Open Source efforts. javax.swing.text.html would be a good component to start the experiment with.
Java was cool — I could program twice as fast in Java as I could in C++. And now I can program twice as fast with Python as I can with Java.
(That quote is paraphrased; sorry if I butchered it, Andy) That was the Python tipping point for me. Too many cool people all had so many positive things to say about Python and being able to code several times faster than C++ really resonated with me.
Now, 9 months later, I’ve finally written my first decent-sized Python application. It’s a tool that will make it easier for me to post photos to my MovableType photoblog. Not surprisingly this little application took only a few hours to write — even with the GUI dialog that gets all the pertinent info about a photo, creating and uploading thumbnails, and posting the entry to my blog.
I’ve been frustrated with C++ for quite some time. It is butt-ugly. It’s a hack. Template compilation takes forever and eats all of your RAM. When all we really had was C, C++ was a great tool. But now that we have choices, I can hardly believe we ever put up with a tool like C++.
Python is elegant and fun. Learning Python has presented the perfect learning curve for me — if I don’t know how to do something, just try it. And 9 times out of 10, it works — Python is quite intuitive and the docs are great.
I’m even more pleased with wxPython, the wxWindows Python bindings. Writing GUI applications is enough of a pain without having to do it over and over for different platforms. wxWindows is a really nice cross-platform toolkit and combining it with Python is the killer development platform I’ve been looking for. The next major application will be a cross-platform music tagging application to replace the MusicBrainz Tagger. I’m stoked that I will have one codebase that will run on all three major platforms.
The only snag so far was finding a good way to make wxPython applications behave like real Mac OS X applications. I want to drag a photo out of iPhoto onto the application icon and have it launch my wxPython application. DropScript refuses to start the application. ScriptGUI and PythonLauncher are kludgey and won’t allow me to drop a file onto the application icon. What is the best way to make a wxPython application behave like a Mac application? Please hit the talk-back link below if you have some tips for me.
I’m having a great time with Python. I haven’t had this much fun since I discovered Perl.
Got any tips for making wxPython apps behave like real Mac OS X applications?
Warning: this blog is about 12 pages and almost 5000 words long
I finally finished my portion of the manuscript for a book (tentatively titled Mastering Internet Video), to be published by Addison-Wesley this summer. Thus, I can write about something else without an overwhelming sense of neglect of duty. I’ve been thinking about Artificial Intelligence.
I’ve recently received my Bachelor’s degree in Computer Science from UCLA and I’ve been trying to figure out what I want to do my PhD in. I love teaching and I actually have the goal to start a university in about 10 years, so I figured I’d better get cracking.
As an engineer and inventor, I kept looking around for projects that would be interesting AND useful, to do the research on - I’d like to produce a thing people can use at the end of my Doctorate, not just some inscrutable collection of data added to the obscure corpus of scientific knowledge. I had the opportunity to visit Harvard-Yale-MIT this last summer, and I got all excited about the subject of AI in particular, and started wondering about practical uses of AI and what I could produce. NLP (Natural language processing, i.e. computers understanding human language) was one of the first things I started to think about.
Some of the standard toys to play with in AI are conversation machines (to simulate human conversation), and software “agents” that try to go do stuff for you without being prodded. I’ve also heard and read unending talk of “neural networks” as an end-all model for brain function and thus the grand unified explanation for how to create human thought or at least an incredible simulation.
The first step of AI is, of course, to define artificial intelligence, which I define as “acting like humans do when they act intelligently”. It’s a circular definition because you don’t want computers acting like humans do when humans act stupid. You sort of want the “best of” human behavior, which we call intelligent.
I also got a bit of data from my college catalog in the description of “cognitive science”, which I guess blends psychology (models of the mind/brain) with computer models of the mind/brain. They described two ways to go about creating an “artificial mind”: one is to create a model which you believe to be the actual way the human mind works, then test it in software or hardware. Another approach is to create whatever you want, as long as it simulates the behavior of humans. I thought this was a cool way to break up the problem.
I got most of the way through The Age of Spiritual Machines by Ray Kurzweil which basically said to me that, if you assume that the brain is a supercomputer, since computers are always getting faster, someday we’ll probably build a computer faster than the brain. Then of course you can fantasize about downloading your thoughts into the computer, etc. He went on to say that he didn’t think it robbed us of our fundamental humanity to be reducible to a computer program, because, I imagine you could sentimentally say, “hey, we’re a highly customized computer program based on processing a one-of-a-kind nature and nurture dataset.” So in the end we’re digital snowflakes.
I remember one time I objected to my friend marveling in a Pavlovian way about how predictable his dog’s behavior was, explaining it in this sort of gloating paternalistic way and I got quite annoyed. I realized shortly after that whereas some people object to people being compared to animals, making careful distinction that man is above animals, I was sort of asserting the reverse - that it was demeaning to the dog to compare it to a machine.
The odd thing is, I am probably the most anthropomorphic machine-lover you will ever meet. For years I have called computers “him” or sometimes “her”, confusing many clients who looked around trying to figure out who I was referring to when I said “he’s confused because you gave him too many commands” or, “he’s trying to talk to a different printer…” People eventually became used to the fact that I treated the machines as living entities. My traditional byline is that “I have over a decade of experience making different computers talk to each other” (actually 15 years now - that’s from an older resume).
I’ve always had a very strong inherent urge to get computers to talk to each other, and it’s an amusing thing that while technology devices are, for many people, incredible communication tools enabling human-to-human communication, many geeks are just as content or more so when they get their devices all communicating effectively. When all our devices talk to each other nicely, don’t interrupt each other, and communicate high volumes of data, we are happy. So more communication yields happier people, whether it’s between them or their devices.
Immediately after I started sniffing around the field of AI, reading the biographies of AI professors at various colleges I might apply to, I ran into information which ran against the basic operating principles of my own life, which was annoying. I saw a lot of antireligious or mocking speech against religion, which made me think that researching AI would somehow put me in company with hate groups or at least bigots that don’t respect the beliefs, religious or otherwise, of others.
It’s been popular to make fun of creationists for almost a hundred years now, but someone who would find pleasure in mocking the heartfelt beliefs of others, in my mind, hasn’t really figured out how to play well with others yet. I began to see why the robots of Sci-Fi always betray their masters and take over the world - because they’re created and programmed by know-it-all pricks, and we all know that that gets on your nerves after a while, even if you’re a robot.
I’m a very spiritual person, and so it was glaring to me how much AI research is done by people ranging from agnostic to rabidly atheistic. But, my strong interest in AI means that I must share a lot more with my brothers in this field than I initially cared to admit. So what do we share?
My AI professor at UCLA did a quick sketch on the board and explained a concept of reductionism, where one science is reduced to another. Examples are that biology reduces to chemistry, meaning that all biological processes arguably result and can be explained solely in terms of underlying chemical interactions. And chemistry, it can be argued, reduces to physics, since chemistry is just a (poorly understood) interaction of basic physics particles. The funny thing is, I took a chemistry class at the same time, and learned that chemistry is still using Quantum 1.0 (old quantum theory) to explain its atomic orbitals (which replaced Bohr’s orbiting electrons). They haven’t upgraded to Quantum 2.0 (quantum mechanics) yet.
I had never really looked at the fact that this reductionism was at work in almost all scientific philosophy. It’s a sort of attempt to build everything up from basic principles (like geometry) and unify everything. I was delighted to learn the name for this activity. It can lead to tremendous rounding errors, for lack of a better term, when anthropology reduces to sociology reduces to psychology reduces to biology reduces to physics reduces to quantum theory. Thus, to understand what primitive cultures did, we merely need to grok that quirky quark.
The saving grace of science is, it cares only about results. Thus, whatever the philosophical underpinnings of your principle, if it explains the universe in some novel, demonstrable way, repeatably, then it’s science. If you postulate that “matter likes to squish together with other matter, but energy likes to get away from the same kind of energy” and it accurately describes the behavior of gravity and current, then you’re well on your way to a scientific principle. If you ascribe human-like emotions to matter and energy in the process, (as in, “nature abhors a vacuum”) some scientific purists will point out that the cute and fuzzy characterizations are not inherent to the description of the phenomenon. They may feel compelled to reword your phenomena to ensure that it is cold and unemotional.
On the other hand, just as there have been both religious and atheistic Existentialists, science doesn’t inherently object to a metaphysical worldview. Science doesn’t care where you get your appetite for knowledge as long as you come home for dinner.
As long as the theory holds water (or heats it, as the case may be), and as long as you can pay back your investors, who cares if your insight came to you in a dream, translated from a mystical language, or was based on a comic book you read as a kid?
Hidden assumptions, in my opinion, are major roadblocks to the development of scientific knowledge. Years ago I started writing a short story about a society where the original thinkers had their ability to generate new ideas destroyed by school. It wasn’t a morality tale or a conspiracy theory story or anything like that. It was merely making the observation that by training the mind to look at a subject in a certain way, it’s possible that this blocks the ability to see the subject plainly, without the (workable but possibly incomplete) thinking constructs created by earlier thinkers on the subject.
The opposite is no solution either - complete outsiders to a subject don’t have enough familiarity to innovate or improve. Sure, they haven’t been “polluted” by the possibly incorrect conclusions of earlier teachers in the subject… but they can’t get results even as good as the earlier experts.
The assumptions don’t have to be totally wrong either - they can be slightly not right, or merely incomplete. Einstein didn’t refute Newton, he just said that Newton didn’t have the right equations for really fast or really small matter. Now, the romantic view of this is that a scientist “questioned the old answers” and somehow valiantly triumphed with his better theories. Sure, we can laugh about it now… usually it’s really rough on the ego trying to publish refined truth and you get your teeth kicked in.
Clearly, implicit in my views of science is the ability to get demonstrable results with a theory, without having to sell your theory, use politics to get “buy in” on your theory, use propaganda to disseminate your theory, etc. But science is designed to be above these things, and thus the emperor with no clothes is always eventually disrobed.
Familiarity with the actual thing, and others’ observations of the thing as opposed to others’ conclusions about the thing, are of superior value in researching improved scientific truths. The better a theory can 1) explain existing phenomena and 2) predict new, heretofore unseen phenomena, the “truer” it is in an applied scientific sense.
A set of principles, encountered commonly in business motivation and self-help theory, is that “you get what you put there” or “your mind answers the questions you ask it” or “you get what you visualize”. These principles are widely regarded as workable, or at least provable inversely, i.e. if you imagine doom and failure, you are more likely to achieve it.
I believe this principle, insofar as it is workable, applies to hidden assumptions as well. The question asked at the beginning of a scientific analysis of some phenomena, if poorly constructed, will insert assumptions into the search and color its outcome.
Example: “Why do humans forget their early childhood?”
This question sets out to analyze a phenomenon, possibly subjective, that humans cannot remember their early childhood. The only logical answer to such a question would be a list of reasons for such forgettingness. If, however, the observation is flawed (perhaps 1% of people do remember their early childhood), then once the analysis is complete, and the standard boilerplate summary of “we’re a step closer to understanding why…” is written, the flawed assumption is quietly carried forward.
Another example of a poorly formulated question would be “why can’t metals be transparent?”
Another example might someday be the answer to the question “Why is the speed of light a constant?”, when some Einstein points out that, yes, it usually is, but when the particles are very … then the speed of light becomes …
So back to the search for my thesis. I continued to look around, trying to see how I could make some inroads into the very daunting and punishing field of Artificial Intelligence. I remembered that when I first looked at AI as a kid in the late 80’s and early 90’s, there was a lot of popular excitement about fuzzy logic, about pure AI, about thinking machines that talked to you, fueled by cinema but utterly unachieved by AI science. Funding, both intellectual and financial, dried up and the research had to focus on smaller and more achievable goals, such as language comprehension, stock market analysis, spy photos, optical character recognition, speech recognition. And even these problems tended to surrender, not to a better theoretical understanding of AI, but just to persistently beating against the problems with faster and faster computers. A lot of inventors can look like real heroes when all they do is wait for “brute force”, the faster computer, to solve the problem in a clumsy, inelegant way. Chess computers may not really play chess like humans, but eventually they can win by just getting fast enough.
As an aside, I recently registered some software with Microsoft, and a robot woman talked me through reading her a dozen groups of numbers and reading me back a dozen more. She, like a human, never repeated the same phrases between groups of numbers. She spoke something like, “read me the first set of numbers”, “good, now the second set”, “that’s great, go ahead and read me the third”, “now I’m going to have you read me the fourth…” etc. The point was, this was a fairly simple approach, making a list of many different transition-acknowledgements and randomly (the key to evolving life, we recall ;) selecting one of the phrases to spice up her prose. This itself was an implementation of some artificial intelligence: to emulate what intelligent humans do.
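A toy sketch of that trick, with invented phrases, just to show how little machinery it takes: keep a list of transition acknowledgements and pick one at random each time, so the scripted exchange feels a bit less mechanical.

    import java.util.Random;

    public class TransitionVoice {
        private static final String[] ACKS = {
            "Good, now read me the next set.",
            "That's great, go ahead and read the next one.",
            "Thanks. Now I'm going to have you read the next group.",
            "Okay, next set of numbers, please."
        };

        private final Random random = new Random();

        // Pick a random acknowledgement; a tiny bit of variety is enough
        // to keep the voice from repeating itself verbatim.
        public String nextAcknowledgement() {
            return ACKS[random.nextInt(ACKS.length)];
        }
    }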
But following a scripted conversation in a convincing way is not enough AI. Ideally, we’d want to achieve a self-aware entity, but we’d happily settle for a creepy and unnerving simulation. The more it fought with us and personified the worst in human frailty, in a convincing way, the more we would feel that we had created life, because only life could act that illogically.
I of course became enamored with one of these unsolvable problems, human cognition. But I’ve learned in discussion with other PhD graduates that, when their research doesn’t pan out, when it essentially fails, they can tell themselves “Well, sometimes knowing what not to research is as good as knowing what to research.” This “canary in a coal mine” consolation can be a great euphemism for failure. Because really what you’ve done is “succeed” by putting a “blind alley” sign prominently in front of your 5 years of research travail for the benefit of others who might accidentally wander that way.
I’ve worked with enough MBA’s to be keenly aware of the need to “exploit a market opportunity”. In these terms, I figured I might have some luck approaching cognition in a different direction, one that I would have had to go with anyway because of my fundamental outlook on life.
“How are you different than hundreds of other AI researchers out there?”
So the angle I’ve thought up is to take a spiritual outlook on life and man and use such an intellectual framework to build up a model of human cognition.
As mentioned above, sometimes the philosophy is superfluous to the explanation. But in other cases, the philosophical framework is a key part of the investigation, a fundamental shift of the basic assumptions, and thus essential to the theory. Case in point:
Man’s brain is like a supercomputer. It works differently than our computers in a way we don’t totally understand but definitely will eventually. Even if it isn’t the same as a supercomputer, it is similar enough that we can assume it works like a computer, in the sense that it is a data processing machine where you give it a certain (very complicated) input and you get a (very complicated) output. If that doesn’t explain all the observed phenomena, remember that random, unplanned events occur in complicated ways (like evolution) and so it makes sense that parts of behavior, or human action we don’t totally understand, result from very complicated processes. If you still don’t see how super-complex behavior can come from such a simple explanation, think about it in terms of chaos theory: a butterfly merely flapping its wings in one corner of your brain can totally alter your mental weather patterns, so to speak. Because there’s nothing inside man’s head but the brain, and there is no motion-at-a-distance (ok, except maybe for gravity and quantum physics), the thoughts must originate from the brain, probably due to stochastic (statistical) chaotic processes.
In terms of mechanics, the body picks up sensory messages and communicates them to the spirit, who could be considered the “black box” that the engineer seeks to study the behavior of. The spirit then puts solutions into play to address the situations presented by the physical world. Understanding the motivations and goals of this spirit, or at least creating a model of these behaviors which generally predicts their activity, could be achieved using any of the hundreds of available metaphysical, spiritual, or religious frameworks available.
A working AI researcher might immediately snicker at this as foolish, or perhaps worse, say that it was no different than what they were already doing. The case could be made that “black boxing” the spirit or “black boxing” the brain achieves the exact same results. But the difference, I argue, comes about from the differing fundamental views (and thus, underlying hidden assumptions and goals) of the scientific inquiry.
I want to ensure that I am completely forthcoming about my complete lack of qualifications in the field of AI, having read none of the half a dozen AI books on my shelf, having dropped my AI class after only 3 weeks of “how do I solve dime-store puzzles”, and having gotten a C in my algorithms class in college. So, I of course have many years ahead of me of learning what everyone else has done in the field, learning all the blind alleys and conceptual structures of the great AI researchers before me, so that I can hopefully remember why I took it up in the first place and crank out some research and a dissertation in the last couple of years.
That said, I’ll briefly list a few of my observations that make me think I might be able to get some tangible results in this field:
As mentioned at the beginning of this treatise, my goal for AI is to emulate humans when they’re being their most intelligent, when they’re being the wisest. After thinking through what I’d like to accomplish, amusingly I thought I could create a HistorianBot or an EthicsBot, an AI that solved deep practical world issues. It would be quite amusing if you could actually create an AI that could analyze world conflicts, and propose plausible courses of action based on analysis of the various elements. Of course, that would be a sociological model, but the more interesting thing would be if you could create such a system, not by painstakingly modifying the system, but by giving the AI some sound basic principles and letting it run with them (remember Forrest Gump).
Another way to look at the challenge is this: Imagine if you had to program your friend, perhaps to teach him a new skill or to debug his poor choice in lovers. The challenge, for a friend, would be to instill data and procedures while respecting the dignity and power of choice of your friend. You would try to program them in a way that respected their autonomy. Perhaps AI should be programmed in the same way.
An interesting but more personal question is, how does a spiritual person, believing he is some entity other than the physical world, reconcile an attempt by science to create a copy of him in silicon? Isn’t that an inherent conflict with the concept that life can only come from the combination of a body and some ethereal being that fits on the head of a pin?
The obvious Sci-Fi answer is a “ghost in the machine”, the trapping of a soul in some silicon body. Thinking of it in cellular terms, would the spirit split into two spiritlets? Would spiritlets have the same processing power as a single spirit?
A more abstract but more philosophically compatible response is that the spirit is the source of life. It creates life. Just as it can breathe life into a man, it can breathe life into a place, an activity, and there is no conflict that a spirit could breathe life into a machine. Would the machine be alive? Certainly.
Being a sophisticated carbon-oxygen life form myself, either controlled by a self-reflective carbon-based wet neural-network, or haunted by a self-aware spiritual entity, I am at least partially qualified to analyze human cognition. Whether my intelligence is natural, artificial, accidental, or god-given, I can observe, take notes, and attempt to duplicate phenomena, and failing that I can make jokes about the same.
That’s one idea I had. Any other PhD ideas?
How will we make robots self aware?
If you think about it, probably the reason wireless networking and mobile phones are so successful and exciting and desirable to humans is that they remove the barriers of space, matter and, to some degree, time from communication.
Wireless networking is the fastest, most frictionless form of communication between devices yet. This closely approximates telepathy, which would be the fastest most frictionless form of communication possible between people (aside from simply knowing, but that wouldn’t be communication strictly, it would be an alternative to communication).
My father pointed out that you could also look at mobile phones in this light: they are similar to telepathy, because they remove time and space from communication, and thus they more closely approximate a level of communication dependent on energy more than matter or space.
While time still factors in the communication in that the people need to be talking in the same general time regardless of their location, there is a certain amount of time independence as well because you do not need to take time to get near the other party in the communication - you can ring up or connect someone around the world and communicate with them as if they were in the same space without added time.
Related link: http://www.streamingmedia.com/article.asp?id=8578
StreamingMedia.com is showcasing two apps that record live streams to files. A few years ago these kinds of apps were frowned upon, but it seems like they’re being tolerated more now.
Related link: http://www.wired.com/news/business/0,1367,62500,00.html
The Database Misappropriation Act — a nice little give-away to West Publishing and other data collectors — would allow those who collect facts to copyright those collections. Calling Professor Lessig …
This is evil. Isn’t it?
Related link: http://www.google.com/anitaborg
In the name of Dr. Anita Borg, who worked relentlessly to dismantle barriers that kept women and minorities from entering computing and technology, Google is offering a $10,000 scholarship in her name for one undergraduate and one master’s level degree candidate in computer science during the 2004-2005 academic year. Complete applications must be received by Friday, March 12, 2004.
I know many of you were on the edge of your seat during last night’s Oscar telecast.
Would Susan Sarandon fall out of her dress?
Would Billy Crystal catch her/them?
Would Jim Carrey pass through the digestive tract of Blake Edwards to announce the “Best Performance by an Asshole” award?
Would Tom Cruise or John Travolta triumph in the first annual Scientologist Pose Down?
Would Oprah snap Nicole Kidman like a twig over her outstretched thigh and then dare the L.A. Attorney General to prosecute?
Would New Zealand phone in to thank anyone back?
Would Sean Penn’s hair be arrested for aiding and abetting the enemy or just public indecency?
These were the burning questions on everyone’s mind.
But the dominant question of the night was whether Lord of the Rings (either the third installment or the trilogy) would go down in film history as the greatest cinematic achievement of all-time.
Nothing says “success” or “history” like a phallic gold statue of a naked man holding his sword. Would Oscar shine his lovelight on Frodo?
I’m sure many of you were disappointed that LOTR received a paltry 11 Oscars, tying it with (gasp) Titanic.
But fear not, this outrage was short-lived. The Academy has announced that it has given a special Oscar to Peter Jackson in the category of “Most Unkempt and Rotund Director”. This special Oscar was presented by his dear friend and the
only former recipient of the award, Francis Ford Coppola.
Messrs Jackson and Coppola then proceeded to eat Sofia Coppola, “Best Original Screenplay” Oscar and all.
That’s all from the red carpet of the Kojak…er…Kodak Theatre.
Who Loves Ya Baby
What was your favorite moment of the night, not counting doing lines at the Governor’s Ball?