As the leaves begin to change in the Hudson Valley each year, Bard’s Hannah Arendt Center puts on a fantastic, colorful, and current conference, and this year was no exception. The title of this year’s two-day event was Failing Fast: The Educated Citizen in Crisis. I was not able to see all of the talks, but the sessions I attended did not disappoint. From Stanford computer scientist and Coursera founder Andrew Ng speaking about the potential of MOOCs to John Seery from Pomona College speaking about the virtues of a liberal arts education, a wide range of views was put forward. Check out the conference schedule for the other speakers.
The organizers of this conference go out of their way to invite intellectuals from across the spectrum to share their views: scientists and writers, conservatives and liberals, entrepreneurs and academics. All were invited to share their views about the future of education. But as at past meetings with such impressively mixed audiences, many of the messages get lost, attenuated, or misconstrued as they travel across the vast intellectual space between the participants.
Drones have captured public attention and discourse in a surprising way. And although the current left-right political coalition against drones in the U.S. is rare, this is not what surprises me the most. The most surprising aspect to me is the mismatch between the public’s technological perceptions and expectations of drones and their technological reality. Computing, algorithms, and AI are to blame for lots of things: putting chess masters out of business, crashing satellites, overdosing cancer patients, electronic trading trouble, reading our email, judging our credit. But targeted assassination? Are the algorithms really to blame here?
Making robotics accessible has been a career-long goal of mine. It connects to many of the themes of this blog: all citizens should be empowered to use robotics (and computing) technology in their own work, and the fields of robotics and computing need to be informed by a diverse set of contributors. My class on Drones this fall will continue this line of teaching and research.
Starting with some of my earliest work on using open-source software with robotics and then later with my work with IPRE on using robots in CS-1, I have been on a mission to make robotics more open, transparent and adaptable. My latest effort in this vein is embodied by the Calico Project. Calico is a learning environment for computing particularly suited for robotics.
Calico is about choice: choice of operating system, choice of programming language, and choice of programming context. The particular operating system (e.g., Windows, Mac OS X, or Linux) or programming language (e.g., Java, C#, Scheme, Python, Ruby) should not limit your pedagogical mission. Although Calico started as a way to easily program the IPRE Scribbler robot in Python on all three platforms, it has evolved into a system that allows students to explore a variety of computing contexts (e.g., Processing-inspired graphics) with a variety of languages (e.g., Scheme). We have extended Calico to work with other robot platforms like the Lego NXT and the Finch. Our next step, in the spirit of Pyro, is to use Calico as a front-end to the Robot Operating System (ROS).
Last week, we had a Calico summer research meet-up at Sarah Lawrence College. Part of this meeting was devoted to understanding how we could use ROS with Calico. A prototype system was presented that is able to control not only the simple ROS turtle simulation but also the Stage robot simulation, the iRobot Create, and the Parrot AR Drone. We are in the process of fleshing out this interface and writing a proposal; stay tuned!
I teach two introductory computing classes at Bard: one using Python (using IPRE’s Calico and robots) and the other with Processing. Each programming environment could be better by borrowing ideas from the other. And by better, I mean a lower floor, making it easier for newcomers to programming, and a higher ceiling, making the tool useful after CS1. Rather than concentrating exclusively on one tool, I am continuing to attack the problem on both fronts.
This post is focused on making Processing better for introductory courses; Calico is next.
My first attempt is a simple tool called sp5repl, a small layer around Scala and Processing that allows you to write Processing sketches dynamically using an interactive read-eval-print loop (REPL). The code entered into the Scala REPL is actually compiled, so it runs at full speed; we get most of the flexibility of Jython and Clojure/Quil with the speed and error checking of Scala. A small example that generates the image below:
sp5repl>fill(196, 128, 64)
sp5repl>ellipse(width/2, height/2, 150, 150)
sp5repl>fill(64, 128, 196)
sp5repl>for (i <- width to 0 by -1) ellipse(random(i), i, i/20, i/20)
What I like most about Keith’s post is that it asks precisely the right questions about how to best use an emerging medium in public outreach and education. Too often we become fixated on new technology and try to use it as much as possible because it’s new, or because it seems able to fit (albeit often awkwardly) into an existing paradigm. Instead we should be thinking about how these technologies work and how they are changing our social habits, and developing our uses of them accordingly.
Frontispiece to A Little Pretty Pocket-Book
Children’s media has been trying to provide “instruction with delight” for as long as there has been media content designated specifically for children. The celebrated children’s publisher John Newbery (called the “Father of Children’s Literature”) promised just this combination in A Little Pretty Pocket-Book in 1744. But every medium has different properties, so understanding what kinds of delight a medium can afford is crucial to making it educationally effective.
If we were creating Sesame Street from scratch in 2012, would it use Scratch? A Scratch-based Facebook? A pre-school MOOC? If we wanted to create a large publicly funded 21st-century education equalizer — what would that look like?
“…provides students with the historical context, theoretical background, and analytical and technical skills needed to engage productively with new forms of humanistic inquiry in our digital age”
The concentration emphasizes the need to think critically in many modes at once, e.g., text, film, and digital media. As my first blog post indicates, my participation stems from an interest in promoting digital literacy and reforming our “read-only” digital culture, as Larry Lessig might put it. Ultimately, we hope literacy in experimental media can push the boundaries of thinking and education.
Bret Victor recently posted an intriguing essay entitled Learnable Programming. I’ve been a fan of many of his essays. Victor is quite a pioneer of what we might refer to as design-based computing pedagogy, as opposed to a programming- or assessment-driven pedagogy. And this essay surely lives up to his previous work; I think I could write fifty blog posts reacting to this single essay. T-49 …
“Science is knowledge which we understand so well that we can teach it to a computer; and if we don’t fully understand something, it is an art to deal with it. Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.”
—Computer Programming as an Art, Donald Knuth, CACM, December 1974.
Why aren’t computer scientists better teachers? Or rather, why aren’t computer scientists the best teachers? We are very good — pros, in fact — at explaining things in absolute detail to machines that know absolutely nothing. If we can explain quicksort to a RISC machine, we should surely be able to teach binary search to teenagers, right?
But of course we don’t teach our computers. And we don’t program our students. Or do we?
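To make the comparison concrete, here is binary search written the way we already “teach” it to a machine: every step spelled out, nothing assumed. (A plain Python sketch for illustration; it is not from the original post.)

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:                # while some of the list is unexamined...
        mid = (lo + hi) // 2       # look at the middle element
        if items[mid] == target:
            return mid             # found it
        elif items[mid] < target:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1                      # the search space is exhausted
```

The machine demands this level of explicitness; a teenager, of course, deserves at least as careful an explanation, plus the intuition the comments only gesture at.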
Last Thursday, one of our Live Arts Bard artists-in-residence, Annie Dorsen, gave us a behind-the-scenes view of her latest production, False Peach. The performance explores the notions of consciousness, agency, and language through an algorithmic production of Hamlet. Although still in its infancy, the play will use automated dialogue and lighting to recreate the famous play in an experimental fashion. I had the chance to attend her presentation of her piece Hello, Hi There at AAAI-2012 in Toronto this past summer. Hello, Hi There will be performed at Bard in November.
The moral agency of robots, or software-bots, has been a hot topic in both philosophy and robotics lately. And from my perspective, the most interesting part of Annie’s work is this notion of agency or control. Who is responsible for the software’s actions, whether it be drones or drama? In some ways, the creators of the play relinquish control of the production to the algorithms. They hit start at the beginning of the performance, and magic happens. Every night is a different, unpredictable performance. Random number generators, equations, and conditionals guide and misguide the narrative. But in an important sense, this type of production is one of the most controlled plays ever. Everything is predetermined, and nothing is left to human improvisation. There is no pesky spotlight operator to miss her cue, no actors to make a slightly late entrance or improvise the delivery of a line. And not only is the play deterministic and in some ways eternal, but that permanence stems from a human encoding this knowledge in a computer program. It isn’t eternal because it’s a machine; it’s eternal because the play is formally encoded.
I asked Annie a question relating to this notion of control, and she had a great response: this work truly investigates the notion of collaboration, both collaboration between humans and collaboration with machines. By formalizing the dramatic decisions in code, a very systematic exploration of collaboration and decision-making becomes possible. It is an exciting approach to exploring agency, and I look forward to the final production next year.
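As a toy illustration of that tension (my own sketch, not Dorsen’s actual system), consider a “performance” driven by a seeded random number generator. Each seed yields a different night, yet every night is fully determined by the code and the seed:

```python
import random

# A hypothetical pool of dialogue lines for the toy show.
LINES = [
    "To be, or not to be, that is the question",
    "Ay, there's the rub",
    "The rest is silence",
    "Words, words, words",
]

def perform(seed):
    """One 'night' of the show: the seed fully determines the run."""
    rng = random.Random(seed)      # instance-local generator, reproducible
    script = []
    for _ in range(3):
        line = rng.choice(LINES)   # chance guides the narrative...
        if len(line) > 20:         # ...and a conditional misguides it
            line = line.upper()    # (long lines are shouted)
        script.append(line)
    return script
```

Different seeds produce different performances, but replaying the same seed reproduces a night exactly: unpredictable to the audience, yet entirely predetermined by the encoding.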
My colleague in the computer science program at Bard, Sven Anderson, is teaching our introductory computing class. The course teaches object-oriented programming using the IPRE robot platform and Python. And although robots are error-prone with their noisy sensors and motors, eat batteries, and make it difficult to grade student programs, few pedagogical approaches to CS1 can provoke the following reaction:
Tonight we took a group of Bard students to Vassar to see the play Truth Values by Gioia De Cari. The one-woman show depicts her “Romp Through M.I.T.’s Male Math Maze.” It was a funny, touching account of the strange world of mathematics research and graduate education — first encounters of the research kind. The play made me reflect on my own experience in graduate school, reconciling incoming expectations with sober, if foreign, realities.
Truth Values also pointed to a few lessons we computer scientists might heed. I would like to highlight a stark contrast between the reality of computer science education and research and that of mathematics. The gender imbalance in computer science is awful, at both the undergraduate and graduate levels. This makes it hard to tease out exactly why this imbalance exists. The story with mathematics education is different. And that just might provide computing educators some insight.
Like many, my complaint with the MOOC isn’t the MOOC itself, but the surrounding hype. My complaint is simpler than the imminent devaluation of teaching, in terms of personal and public investment, thanks to the MOOC marketing (as Bogost has argued and others have provided evidence for). My primary complaint with the hype is the hubris.
The MOOC hype grossly underestimates the computational difficulty of teaching.
“… the computer is being used to program the child. In my vision, the child programs the computer and in doing so, both acquires a sense of mastery over a piece of the most modern and powerful technology and establishes an intimate contact with some of the deepest ideas from science, from mathematics, and from the art of intellectual model building”
— Seymour Papert in “Mindstorms: Children, Computers, And Powerful Ideas”
Computing is a Liberal Art is my personal blog on computing, robotics, liberal arts education, and various other things. Why? Students studying the liberal arts and sciences need lucrative, creative career opportunities. The tech industry needs a more diverse, creative workforce. Moreover, it is imperative that all citizens have a role in shaping tomorrow’s technology, not just the technologists — ensuring all citizens are programming rather than being programmed.
As computation plays a larger role in our professional, personal, and civic lives, thinking and communicating algorithmically has become as important as literacy in any other sense. And although being skilled at a particular computer application is often a prerequisite, it isn’t the whole story. As Papert (a mathematician, computer scientist, and student of Piaget) remarked, the potential of computing is “a revolution in ideas that is no more reducible to technologies than physics and molecular biology are reducible to the technological tools used in the laboratories or poetry to the printing press.” Being truly computationally literate means one can begin to think using the symbols and ideas of computation to solve problems or make art in new ways.
Some related takes on computing & the liberal arts: