Very much enjoyed how Alan Kay (probably in 1987) builds up in this talk, with a nice example, towards the idea of a context that does most of the thinking for you, and then compares it to choosing the right data structure, which helps compute results through its inherent structure alone.
I transcribed the three crucial sentences starting at 3:55:
If you want to be good at solving a problem and acting much smarter than you are, then you have to find your context so it’s gonna do most of the thinking for you.
Most computer scientists know this because it goes under another heading called “choose the appropriate data structure before you start tinkering around with the algorithm”. Find the right data structure, it will have most of the results computed almost automatically as part of its inherent structure.
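A tiny illustration of that principle (my own example, not from the talk): counting word frequencies with a plain dict takes an explicit algorithm with manual bookkeeping, while a `Counter` has the result built into its structure almost automatically.

```python
from collections import Counter

words = "the quick brown fox jumps over the lazy dog the fox".split()

# The "algorithm" way: manual bookkeeping in a loop.
freq = {}
for w in words:
    freq[w] = freq.get(w, 0) + 1

# The "right data structure" way: the frequencies are computed
# as part of the structure's inherent shape on construction.
counts = Counter(words)

print(counts.most_common(2))  # [('the', 3), ('fox', 2)]
```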
[Addition Feb 22, 2014]
Also enjoyed this statement from Michael Blaha in this interview about UML for Database Design:
Often the most difficult aspect of software development is abstracting a problem and thinking about it clearly — that is the purpose of conceptual data modeling.
A conceptual model lets developers think deeply about a system, understand its core essence, and then choose a proper representation. A sound data model is extensible, understandable, ready to implement, less prone to errors, and usually performs well without special tuning effort.
Recently I had a dream that spanned a whole morning through several wake-up/half-awake/fall-asleep iterations. It was about trying to understand an algorithm; some sort of long-winded structure describing a mechanical process. I can’t quite remember. Anyhow, what I do remember was my ongoing attempt to grasp what this thing does as a whole. I remember sensations of understanding parts of it but failing to comprehend how the parts interact. I remember sensations of “comprehension-pockets” blurring into confusion again when trying to focus on the next higher level of functionality. Interestingly, I showed more persistence in this ongoing mental scenery than I probably would have demonstrated when facing a challenge of that type while awake. There was no frustration building up – it was more like a stoic inquiry into the right sequence of understanding parts that would lead to understanding the whole…
I’d like to use the occasion of this dream to share some thoughts I am having as a result of the last few years of my biography, which are best described as follows: running from academia, being immersed in “alternative” (experience-based) education, and, since winter 2012, being back on track with computer science and mathematics in an academic context.
At this point I can best follow through with this intention by posting a revised version of an email I sent to the Viewpoints Research Institute in Los Angeles a few months ago as a motivational letter explaining what attracted me to them for my internship in summer 2014. (I ended up choosing an internship at the LRZ, but would be very happy to have the chance at some point to join forces with the VPRI, in particular to work with (= learn from) Alan Kay.)
I’ve been fascinated with math, programming and algorithms ever since I saw the Mandelbrot set for the first time, as a result of my own code, in the middle of some night during my time at secondary school. [...] As a result of the past years I have become very sensitive to the difference between real content created through deep original thinking and the shiny packaging (which does have its place, though) of someone else’s work – or even just the repetition of phrases within a social club.
I am too young to have experienced the “early days of computation” myself – but I am eager to comprehend this storyline not just in names and pictures (that too), but also by really understanding what kinds of possibility spaces were unlocked through the contributions of the various people thinking in this sphere over the last decades, and how they were building on the shoulders of thinking from the last centuries, and so forth. There is more to computation than ever-faster processing units and ever more satisfying customer experiences. And I have a feeling at least the past two decades have mostly focused on that business/excitement part of computation. However, I am not bold enough to claim to know what that untapped potential is all about.
The second point is the focus on learning. I am a very visual thinker and need to “see” the patterns and dynamics of things to tie new knowledge into my brain. It makes me curious what can possibly be misunderstood about a particular problem, because understanding possible gaps or “bad” entry points requires comprehending “the space around” a particular problem. As Alan pointed out in his presentation about conveying Pythagoras’ theorem: there are many different conceptual approaches to explaining it – but each of them has a different value attached as to how well it prepares the brain on the receiving end for similar problems in the future. Most approaches can help understand this particular problem for the next exam – but might very well cause confusion later on when other problems are explained with a very different conceptual strategy. On the other hand, an approach can set a “neuronal base” for thousands of other problems to come, in theoretical and very practical situations that a person might encounter throughout a lifetime!
I’d dare say that caring about the quality of the knowledge graph in people’s brains is the most important investment we can make as a society. And it is also the most expensive one – because just to be open to (wanting to) hear anything about Pythagoras’ theorem, you need a stable infrastructure around you that allows for peace of mind in terms of covering basic and social needs. It cannot be overestimated how much an open, curious and peaceful mind is worth in very real costs when seen statistically across a society. However, as a result of that rich infrastructure, it becomes increasingly important for those who benefit from it to set aside the various distracting elements (entertainment, status…) long enough for new knowledge to get anchored. A brain after learning something is not the same as it was before – the neurons are physically restructured. It is that focus on salience in patterns that sets apart the noise from the trajectory into the future. Not in a sci-fi / trans-humanistic / futuristic / singularity / whatever way, but in a very real, causal way.
Not necessarily restricted to programming languages. The first step would be to find out what the person knows, through direct checkboxing or through test questions. The next step is to find out what the person wants to learn. Then search for tutorials, teaching materials, analogies, stories etc. to match that specific mapping of “already-know to want-to-learn”.
Of course such a mapping database has to be built first. So i’d start with a large survey where people submit their mappings. Then there would come a phase of identifying people who can create (or select / adapt existing) teaching materials for all the categories. So an experienced C++ and Java programmer could sit down and think about what kinds of snippets, analogies or tutorials could help someone who already knows Java and wants to learn C++…
Anyone who has gone through learning material can tag it with other potentially relevant mappings (“escalate the relevancy of the content vertically and/or horizontally”), à la “i think this material could also benefit people who want to transition from a meat-based diet to a vegetarian diet” (unlikely that this example fits together with the previous Java > C++ example… but who knows).
Playlists could be assembled à la “this really helped me to get to B based on the A that i already knew – now i can easily get from B to C, and i recommend this order of learning instead of trying to get straight from A to C”.
Of course there should also be some gamification with rewards that incentivises both the creators of teaching materials and the learners.
I think if you can tie the desired learning very specifically to things your student already knows, you can get very efficient at building new territory in the knowledge graph of the student’s mind. Somewhat like a personal master-student setup, but at a large internet-database kind of scale?
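A minimal sketch of how such a mapping database could look (all names and materials here are made up for illustration): submissions are keyed by (already-know, want-to-learn) pairs, and a “playlist” like A > B > C is simply a path through the resulting graph.

```python
from collections import defaultdict

# (already-know, want-to-learn) -> list of teaching materials
materials = defaultdict(list)

def submit(knows, wants, material):
    """Someone submits a material for a specific mapping."""
    materials[(knows, wants)].append(material)

def recommend(knows, wants):
    """Look up materials for one specific mapping."""
    return materials.get((knows, wants), [])

def learning_path(start, goal):
    """Breadth-first search over the mapping graph: a playlist
    A -> B -> C instead of trying to go straight from A to C."""
    frontier = [[start]]
    seen = {start}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for knows, wants in materials:
            if knows == path[-1] and wants not in seen:
                seen.add(wants)
                frontier.append(path + [wants])
    return None

submit("Java", "C++", "snippets: Java collections vs. the STL")
submit("C++", "Rust", "tutorial: ownership for RAII veterans")

print(recommend("Java", "C++"))
print(learning_path("Java", "Rust"))  # ['Java', 'C++', 'Rust']
```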
[Addition Dec 29, 2013] I would like to add a critical voice to this idea. “Sideloading” knowledge by specifically building on what you already know is great for quick access – for hacks. Nevertheless there is a strong point to be made for knowing something from the ground up and learning the language and metaphors of a domain independently of the access points it may have to other domains…
One of these situations that makes me stop and smile about myself and reflect about patterns, habits and just how i (and possibly everyone else) work… and it’s time for a fresh post here – so let me share the story with you.
Just as i switched off my computer at work, i remembered something that i wanted to do online. OK, i’ll do it first thing at home. But how to remember? Yeah, i could send myself an email from my iPhone, as i have done thousands of times before. But hold on – actually there is a little physical thing on my desk that will most definitely help me remember once i retrieve it from my trouser pocket or backpack at home. But what now – pocket or backpack? Which option is more likely to yield the desired effect, namely the token “falling into my hands” at home without my having to remember that it is there? Basically: how can i outsmart my future forgetful self? I choose the pocket – knowing that i might not have reason to check my pocket again this evening, but surely at some point during the coming days. At the latest before the next washing, because that (checking the pockets of trousers before putting them in the washing machine) is a routine i can (almost) certainly trust my future self to follow. So i think to myself: the future-reminder will definitely not be lost; it will trigger at the very latest in about a week from now. Which i deem acceptable, as the matter isn’t that urgent.
Alright then, i bike home along the Isar river through a rainy but still lovely Munich. My thoughts wander and the memorization task in question is quickly lost. UNTIL…
I come home, unlock the door… and have a funny moment of thinking about where to put the key before taking off my shoes. Because apparently i learned from my forgetfulness of the previous weeks, when i repeatedly left the key after entering the apartment either on the entrance shelf or on the kitchen shelf. After taking off my shoes i would forget that i had put the key there, and it would stay there until the next morning, when i would have a short freak-out moment searching for my keys while packing my backpack for the day. Sooo, i decide to put the keys in my pocket… and GUESS WHAT i discover in my pocket when putting the keys in! Yep, the memorization token, right there!
To sum up: the result of a learning loop regarding key-placing forgetfulness led to the immediate retrieval of the memorization token (instead of up to one week until the washing machine) – and, as a result, to the online action that the token was “charged with”.
Cool, check, done, NICE! Then i went for a run along the Isar and found myself continuously amused by this story, thinking about how clever it would be to analyze one’s daily workflows/routines with the intention of identifying these “pockets” (now meant abstractly, not (necessarily) the trouser pocket). Pockets where memorization tokens can be placed physically (or virtually) because of an action that you will reliably perform in the future within a certain timeline, and that will make you “automatically” retrieve that very token again… strategically capitalizing on your reliable patterns to memorize things in a way where you can forget them with the comfort of knowing that they WILL find you again.
Is there something like a scientific politics simulator (or do i have to build one some day)?
I’d like to feed the election program of a party into a simulation engine that mimics “the real world” as best as possible and projects societal development forward given the parameters/claims/guidelines in the election program.
This would go beyond algorithmic truth-checker approaches (i.e. automated verification of statements based on scientific sources, which already exists) towards statistically robust simulations.
Only to supplement the discourse, of course – never as the sole basis for decisions.
It should be open in a way that programmed/designed data-pools like <82 million Germans> or <international law> can be fed into an evermore comprehensive simulation landscape. Data-pools could be “tested” for their “realness” by simulating decades/centuries that have already happened. Take data up to 1980, for instance, and then simulate up to 2010. Compare the results with what actually happened, and go back to refining your data-pools so they are a closer match the next time. And of course this wouldn’t be the only test for data-pools. The tests would have to be an open marketplace themselves…
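To make the “simulate history you already know” test concrete, here is a toy sketch (all numbers and the model itself are invented for illustration): calibrate a trivial growth model against a 1980 value so that simulating 30 years forward best reproduces a hypothetical 2010 observation.

```python
def simulate(start_value, growth_rate, years):
    """A deliberately trivial 'simulation engine': compound growth."""
    value = start_value
    for _ in range(years):
        value *= (1 + growth_rate)
    return value

# Hypothetical historical data for the backtest.
actual_1980 = 100.0
actual_2010 = 181.1

def backtest_error(growth_rate):
    """How far does simulating 1980 -> 2010 land from what happened?"""
    predicted = simulate(actual_1980, growth_rate, 30)
    return abs(predicted - actual_2010)

# "Refine the data-pool": grid-search the rate that best
# reproduces the known history before trusting it for the future.
best = min((r / 1000 for r in range(0, 50)), key=backtest_error)
print(best)  # 0.02
```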
Together with a fellow student i had the assignment of giving a presentation about the well-known traveling salesman problem in the Applied Mathematics course at uni. It’s fairly low-level, since we are in the 2nd semester at this point – but i put some good effort into visually explaining the concepts of both brute force and dynamic programming, going in depth on how to build the necessary graphs and permutation sets in Java. I’d like to share it so it can be helpful to someone else.
This [link] points to a folder in my Dropbox where i gathered all the content from my part of the presentation. I added an English version of the main slideshow and readme files with instructions. You might be best off downloading the whole folder (“Download” > “Download as .zip”) and then exploring its content on your machine.
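For readers who just want the gist without downloading the slides, here is a compact sketch of the two approaches in Python (rather than the Java of the presentation), on a made-up four-city distance matrix; both arrive at the same optimal tour length.

```python
from itertools import combinations, permutations

# Hypothetical (asymmetric) distances between four cities.
dist = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]
n = len(dist)

def brute_force():
    """Try every ordering of the cities after city 0: O(n!)."""
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

def held_karp():
    """Dynamic programming over subsets: O(n^2 * 2^n).
    dp[(S, j)] = cheapest path from 0 through set S, ending at j."""
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                dp[(S, j)] = min(
                    dp[(S - {j}, k)] + dist[k][j] for k in S - {j}
                )
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

print(brute_force(), held_karp())  # 21 21
```

Brute force is fine for a handful of cities; Held–Karp trades factorial time for exponential memory and is the standard exact DP formulation.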