The Broken Hut
Working my way up to a full-size building
16th-Oct-2007 11:35 pm - Hill-climbing as a search technique

Photograph of hills

If you’ve spent any time hillwalking at all you’ll be familiar with the “false summit effect”. From afar you can tell where the top of the mountain is, but as you get onto the slope itself your view is occluded. You fix your sights on a little hill just in front of you that blocks your view of everything else. But just as you reach this summit, you find that there’s more to come. And in the worst case, you might have to go back down from the mini-summit you’re on to reach the slopes of the higher part of the hill.

That is the false summit.

Searching for things in computer science is often likened to climbing a hill, where the summit represents the goal, and every point on the landscape represents a candidate for this goal. The idea is that, without knowing where the summit is, one can get there by always walking up hill. Assuming an ideal (smooth) landscape one could get to the top of a hill blindfold. (Not recommended.)

The flaw, which I’m sure you’ll have spotted, is the false summit. If we can’t see where we’re going then we can never be totally sure if we’re on a minor summit rather than the very top of the mountain. This is called a local maximum—every direction you walk goes down, but there is a direction which will eventually take you to a taller peak.

The solution is to add a bit of randomness. If you’re at the top and don’t know where to go, spin round in circles until you’re dizzy and then strike out in the direction you end up facing. (I don’t suggest you do this literally either. I’m not going to be responsible for dizzy people hurling themselves off hillsides.) With a bit of randomness added to the search pattern you can break out of the local maxima.

Of course, this is one of those circumstances where you can never be totally sure you’ve reached the true summit. Though if all you’re looking for is a hill that’s high enough, whatever you end up on will fit the bill adequately.
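Here’s what hill-climbing with random restarts looks like in code. This is only a toy sketch in Python with a made-up one-dimensional landscape: the plain hill-climber walks uphill until every direction leads down, and the restart version adds the randomness by starting again from random spots and keeping the highest summit it finds.

import math
import random

def height(x):
    """A made-up landscape: a small hill near x = 1, a taller peak near x = 4."""
    return 2 * math.exp(-(x - 1) ** 2) + 5 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.05):
    """Always walk uphill; stop when every direction leads down (a summit, possibly false)."""
    while True:
        best = max((x - step, x + step), key=height)
        if height(best) <= height(x):
            return x
        x = best

def hill_climb_with_restarts(restarts=20):
    """Add randomness: restart from random spots and keep the highest summit found."""
    summits = [hill_climb(random.uniform(-2.0, 7.0)) for _ in range(restarts)]
    return max(summits, key=height)

best = hill_climb_with_restarts()
print("best x = %.2f, height = %.2f" % (best, height(best)))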

Photograph credit goes to Christof Autengruber on Flickr.

The next instalment in this exciting saga of family feuds and cyberwarfare. Tybalt has sent an invitation email to Romeo, impersonating Juliet in order to tempt Romeo to reveal himself.

Romeo opens his mail and finds this message waiting for him:

From: "Juliet" <juliet@capulet.net>
To: "Romeo" <romeo@montague.net>
Subject: secret meeting

Come to the town square at midnight, behind the clock.
Come alone and make sure you’re not followed.

J. xxx

If he takes the bait and travels to meet his love, who knows what terrible fate will befall him?

Luckily, we don’t have to worry, because Romeo and Juliet have been smart. They’ve taken precautions.


Forging an email is the easiest thing in the world. Once you see how easy it is then I think you’ll understand why you should never trust the From header in an email.

A while ago I used the Montagues and Capulets to explain how the domain name system worked. I’ll do the same again to show how easy it is for anyone to spoof your email address.

The plot thickens!

As per the story, Romeo and Juliet are separated after the party. Tybalt wants to kill Romeo and knows he can use the lure of Juliet to trap him. Tybalt’s email address is tybalt@capulet.net but he wants to email romeo@montague.net as Juliet.

Ordinary email programs don’t allow you to pretend to be someone else (though they could if they wanted). But when a program sends an email it is just having a very simple conversation with a mail server using a predefined protocol. So all Tybalt needs to do is have that same ‘conversation’ with the mail server.

A program called telnet lets you get down to the gritty details. You can pretend you’re an email program, a web browser or anything else, as long as you give the correct response to the questions you receive from the other computer.

Tybalt gets started

First, he has to connect to the Capulet family mail server using telnet. The line beginning with the dollar sign is the command he runs at his command line. You can easily try this at home if you know the name of your mail server.

$ telnet mail.capulet.net 25
Trying 192.168.2.14...
Connected to mail.capulet.net (192.168.2.14).
Escape character is '^]'.

The mail server sends messages prefixed with a number. This is the status code which your email program would recognise and respond to. The words on the rest of the line are put there for the benefit of people who want to test the system at this low level (or subvert it). Any line which doesn’t begin with a number is written by Tybalt.

First, the mail server identifies itself and then Tybalt does likewise—and pretends to be Juliet’s laptop.

220 mail.capulet.net ESMTP
HELO julietslaptop

The mail server then shows that it’s ready to take commands. This is where Tybalt pretends the email is coming from Juliet’s address and going to Romeo:

250 Nice to meet you julietslaptop
MAIL from: juliet@capulet.net
250 OK ... Sender accepted.
RCPT to: romeo@montague.net
250 OK ... Recipient accepted.

Then Tybalt has to tell the mail server to receive the content of the email, using the DATA command. Notice that he puts To and From information in this part of the message too. If he omitted these then Romeo would still get the message but the To and From headers in his email program would appear blank. This is like putting ‘Dear Romeo’ and ‘from Juliet’ inside the letter—the bit above is just the address on the envelope.

DATA
354 Ready for message. Enter "." on its own line to finish.
From: "Juliet" juliet@capulet.net
To: "Romeo" romeo@montague.net
Subject: secret meeting

Come to the town square at midnight, behind the clock.
Come alone and make sure you're not followed.

J. xxx
.
250 OK Message transmitted ID 82679401

The dirty deed is done. Tybalt can log off and head out to capture Romeo unawares.
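Incidentally, you don’t even need telnet for this. Here’s a rough sketch of the same forgery using Python’s standard smtplib module. The hostname and addresses are the fictional ones from the story, so it won’t run against a real server, and most real mail servers these days would treat it with rather more suspicion than the Capulets’ one does here.

import smtplib

# The headers inside the message are what Romeo's mail program will display.
message = """\
From: "Juliet" <juliet@capulet.net>
To: "Romeo" <romeo@montague.net>
Subject: secret meeting

Come to the town square at midnight, behind the clock.
Come alone and make sure you're not followed.

J. xxx
"""

server = smtplib.SMTP("mail.capulet.net", 25)   # same as: telnet mail.capulet.net 25
server.helo("julietslaptop")                    # HELO, pretending to be Juliet's laptop
# The first two arguments are the envelope (MAIL FROM and RCPT TO);
# nothing checks that they match the headers above.
server.sendmail("juliet@capulet.net", ["romeo@montague.net"], message)
server.quit()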

Or will he?! Find out next time…

28th-Jul-2007 07:48 pm - Information theory and message codes

Some more on information theory. Last time I covered issues to do with the reliability of the ‘channel’, the medium used to transfer the data between sender and receiver. To summarise what was in the last post:

  • Information is data transferred between sender and receiver, so that the receiver then learns something that they didn’t know before.
  • All transmission channels have some error rate, so it is useful to be able to talk about, and calculate the effects of, these errors on data we transmit.
  • Adding various types of redundancy can mitigate the errors. We know how much redundancy to use because we can make accurate calculations about error rates.

This deals with the problem of transmission, but we still don’t know how to create the message in the first place. What is the most efficient way of turning my thoughts into something which will travel over the channel? This problem, called coding, is an age-old one: it’s the problem of translating thoughts into voice or words or pictures.
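As a taster, here’s a toy example of my own (not from behind the cut): if some symbols crop up more often than others, giving the common ones shorter codewords makes the average message shorter than a one-size-fits-all code would.

# Four symbols with skewed frequencies. A fixed-length code spends 2 bits on every
# symbol; a prefix code that gives the common symbol the shortest codeword does better.
freqs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

fixed = {"A": "00", "B": "01", "C": "10", "D": "11"}       # 2 bits each
prefix = {"A": "0", "B": "10", "C": "110", "D": "111"}     # no codeword starts another

def average_bits(code):
    return sum(freqs[s] * len(code[s]) for s in freqs)

print("fixed-length:", average_bits(fixed), "bits per symbol")      # 2.0
print("variable-length:", average_bits(prefix), "bits per symbol")  # 1.75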

Read more about the coding problem and how information theory helps...
11th-Jul-2007 07:18 pm - Information theory and redundancy

In my last post I introduced information theory as one of those obscure ideas which creationists love to misrepresent for their own purposes. Well, I can’t stand to see good science have its name besmirched like that, so let’s rebalance things by looking at the good stuff in information theory and what we use it for.

Information theory was invented in the 1940s to deal with the problems faced by long-distance telecommunications — the original paper was written by Claude Shannon under the title A Mathematical Theory of Communication. Since then it has been expanded upon and applied in nearly every aspect of communication, computing and more besides.

‘Information’, according to the theory, is the individual bits which make up a message which has to be transmitted. They could be computing-style bits (ie, binary digits, ones and zeroes) or they could be individual letters from the alphabet. The problem is that the sender and receiver are some distance apart and so they have to use some indirect means of transferring this information, called a ‘channel’.

Information is passed along a channel. This is an abstract way of saying “dots and dashes are passed down telegraph wires” or “smoke signals are sent into the sky” or “words are spoken into cocoa tins tied together with string”. The actual type of channel doesn’t matter for the theory, only that there is one. When the receiver picks up a signal they have to pull the sender’s information out from all the background noise.

And therein lies the problem. It would be dead easy to pull out the signal if you already knew what it was, right? You’d know exactly what to look for, but it would also be pointless. You wouldn’t have learned anything, so technically the message would have an information content of zero. The other extreme is a message that could be anything, such as a stream of random numbers. This has high information content because each number is completely unpredictable, even if you know all the others. With each new number that arrives, you’ve “learned” something.

If a channel is noisy then many of the bits which are transmitted get corrupted before the receiver sees them. The reliability of the channel is the probability of a bit passing through unscathed. If you have a 100% reliable channel and you’re receiving random numbers, then you know they are all correct. But no channel is completely noise-free, so how do you know which random number is intentional and which is a corruption?

The answer is redundancy. If the channel is not very noisy, you hardly need any checks at all. But if the channel is really unreliable you may need to send the same information several times to overcome the corruption. One of the big contributions from Shannon’s paper was the ability to calculate how much redundancy you need. You can tell how much information you’ll lose during transit and so decide how much extra resource you wish to spend on increasing reliability.
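To make that concrete, here’s a small simulation of my own devising (not Shannon’s actual mathematics): send each bit three times over a channel that flips one bit in ten, and take a majority vote at the receiving end.

import random

def noisy_channel(bits, flip_probability):
    """Each bit passes through unscathed with probability 1 - flip_probability."""
    return [b ^ 1 if random.random() < flip_probability else b for b in bits]

def send_with_repetition(bits, flip_probability, repeats=3):
    """Redundancy: send every bit several times, then take a majority vote."""
    received = noisy_channel([b for b in bits for _ in range(repeats)], flip_probability)
    groups = [received[i:i + repeats] for i in range(0, len(received), repeats)]
    return [1 if sum(group) > repeats // 2 else 0 for group in groups]

message = [random.randint(0, 1) for _ in range(10000)]
plain = noisy_channel(message, 0.1)
voted = send_with_repetition(message, 0.1)

print("errors without redundancy:", sum(a != b for a, b in zip(message, plain)))  # about 10%
print("errors with 3x repetition:", sum(a != b for a, b in zip(message, voted)))  # about 2.8%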

You can see this use of redundancy in the real world with police radio codes. Police walkie-talkies are low quality, and they make lots of crackle and fuzz. So the officers don’t use single letters when communicating car registration numbers: they use whole words. Alpha Bravo Charlie has a lot more redundancy than A B C, so it’s clearer when you add a bit of static — ?lph? Bra?? Cha?lie.

If you add redundancy then you increase reliability, but you also take up more room. It means more data to transmit, which slows down messages, and that isn’t good for anybody. Shannon also covered this in his paper, and I’ll talk about it in my next post: compression.

10th-Jul-2007 02:07 pm - Half-truths and information theory

According to Darrell Huff, it’s easy to lie with statistics. But it’s just as easy to lie with any kind of science, for the same reason — there’s a lot of knowledge out there and most people aren’t familiar with the basics. So people can come up with intuitively plausible ideas like homeopathy using some quantum-theoretic means of healing, and many people will accept it.

Creationist lecturers make a living from this ambiguity: giving just enough knowledge to tempt the audience into making a leap to a fallacious conclusion. One of the examples the Intelligent Design creationists like is something called Information Theory, which they misrepresent and abuse readily.

Information Theory sounds like it might be about meaning and purpose. Of course, information in that sense is entirely in the eye of the beholder. (Imagine having a crossed line on the telephone while you’re talking to a friend. As far as you’re concerned, the other conversation going on is ‘noise’, while yours is ‘information’; the other people think the exact opposite.) There are no formalisms for deciding how useful something is. But the creationists don’t clarify what is meant by information; they just let you make assumptions because it suits their purpose.

In information theory, a message has information if you can’t predict what it says before you read it. So a dice with a single spot on each face provides no information — you knew it would show a 1 before you rolled. An ordinary dice is unpredictable, so when it stops you’ve learned some piece of information. This is how information theory regards information: as something which informs.

Whether it is interesting or useful is irrelevant. In fact, the most informative message possible is one which consists entirely of random numbers, since it has the most unpredictability.
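Shannon gave this unpredictability a number, the entropy: the average amount you learn from each outcome. Here’s a quick sketch (my own toy figures) comparing the single-spot dice, an ordinary one, and a completely random byte.

import math

def entropy(probabilities):
    """Shannon entropy in bits: the average unpredictability of one outcome."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

single_spot_dice = [1.0]        # always shows 1: nothing is learned
ordinary_dice = [1 / 6] * 6     # six equally likely faces
random_byte = [1 / 256] * 256   # any of 256 values, all equally likely

print(entropy(single_spot_dice))   # 0.0 bits: the outcome was known in advance
print(entropy(ordinary_dice))      # about 2.58 bits per roll
print(entropy(random_byte))        # 8.0 bits: maximally unpredictable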

As well as not defining what is meant by information, creationists then claim that information cannot be created. (This is why they bring it up: to show that complex DNA cannot evolve.) But from the correct definition given above, we can see it is very easy to create complex information. Any noisy signals — leaves rustling, background radiation — are fantastic sources of information.

They mean to say that structured, interesting information cannot be spontaneously created, so genomes require a designer. But in this regard, information theory is not helpful, so they lie about what it says and hope to create a convincing argument from the half-truths and fancy mathematical terms.

In further posts I’ll talk about the good stuff in information theory, and places where you’ll come across it. Tune in next time!

12th-Jun-2007 10:05 pm - Propagation of computer worms

The field of epidemiology studies the health of populations with an eye to the detection and prevention of illness.

Disease is fascinating in any light, and as our world shrinks due to global travel we’re likely to see a lot more of it in new guises. A single case of disease in an exotic place is a potent threat to major population centres, because its carrier can travel round the world in a day.

Securing yourself against infection is a proper arms race: infectious agents and defences improving in lock-step, forever exploiting and then being beaten back.

Nothing about the above paragraphs is unique to biology. In the early hours of 25 January 2003, the fastest-spreading global infection ever seen began to take hold, on the internet.

The infection, known as the Slammer worm, was the first of a new kind, the ‘Warhol worm’: one that would spread as fast as it could within its “15 minutes of fame”. This epidemiological analysis of the appearance, spread and weaknesses of the Slammer worm makes fascinating reading for the geeky. This is what happened...
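The full analysis is behind the cut, but the reason a random-scanning worm explodes so quickly can be sketched in a few lines. This is a toy model with rough, illustrative numbers rather than the published measurements: every infected machine probes random addresses, so the more machines are infected, the faster new victims are found, and growth is exponential until most vulnerable hosts have already been hit.

# A toy model of random-scanning worm spread (illustrative numbers only).
vulnerable = 75_000          # hosts running the vulnerable service
address_space = 2 ** 32      # the worm picks IPv4 addresses at random
probes_per_second = 4_000    # scans each infected host sends per second

infected = 1.0
print("start: 1 infected")
for minute in range(1, 11):
    for _ in range(60):
        # Expected new infections this second: probes that happen to land on a
        # vulnerable host that isn't infected yet.
        hit_rate = (vulnerable - infected) / address_space
        infected = min(infected + infected * probes_per_second * hit_rate, vulnerable)
    print("after %2d min: about %6d infected" % (minute, infected))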

7th-Jun-2007 01:58 pm - Fixing broken programs the easy way

All software of sufficient complexity to do something useful (and even some that doesn’t do anything interesting at all) will contain bugs. This is a fact of life.

It’s the job of the programmer to find and remove these bugs. There are many ways of going about this but the most interesting, I think, is using a tool called a debugger.

If you’re a doctor, probably the best diagnostic tool you could imagine is to take a patient and cut them open and watch their innards move around, undisturbed, as they go about their daily lives. Imagine you could puncture a huge hole in someone’s chest to examine their heart and lungs while they sit there in the consulting room, chatting away amiably.

Unfortunately for the medical profession, people tend to cry out in great torment, flop on the floor and bleed everywhere if you do all that. (Though something very similar does come up in Use of Weapons by Iain M Banks, where it’s used as a party gimmick!) But programmers can get away with it.

A debugger is a tool that lets us run programs with their innards on display to the world. If something bad happens we can rewind to see what conditions caused it. We can see the last actions of the program before it did something stupid. Why did the user’s high score suddenly change to -2147483648? Why does it crash every time I click Save?

The debugger will even let you ‘run’ the program in discrete steps, like advancing a frame at a time. Or you can pause the program whenever you need — on every autosave, or every new wave of aliens, or just as the whim takes you.

If you see a problem you can tinker with the program as it runs. It would be like your doctor reaching in to ‘fix’ up your plaque-hardened arteries while you are in suspended animation, then releasing you to see how you feel. Pretty neat.
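If you want to try one, Python ships with a debugger called pdb. This toy example of mine pauses the program at the breakpoint() call and drops you at a (Pdb) prompt, where you can inspect variables (p scores), step a line at a time (n), change values, or carry on (c).

def update_high_score(scores, new_score):
    breakpoint()              # execution stops here, innards on display
    scores.append(new_score)
    return max(scores)

print(update_high_score([120, 340], -2147483648))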

Debuggers are not much use outside the programming field. In fact, they’re one of those specialised tools that people can’t even imagine until they’ve seen one. But they’re also extremely cool, which is what counts.

Artificial intelligence as a subject of stories goes back a long way. The Jewish story of the golem, the tin man in The Wizard of Oz, Pinocchio, Pygmalion — they are all tales of artificial people looking for some element of humanity.

This is what a lot of the old stories focussed on. I suppose it was after Mary Shelley’s Frankenstein that storytellers used artificial creatures as sources of horror. They started to get less tangible too, taking on roles that would previously have been taken by malicious gods or spirits. The characters of SkyNet in Terminator and HAL 9000 in 2001: A Space Odyssey are difficult to pin down. They appear to exist everywhere, controlling the environment through computer networks.

Obviously these stories have moved with the times, as public perceptions of computers have advanced. The robot-woman in Metropolis, or the robots of 50s pulp scifi, are more closely related to the Mechanical Turk than to the Puppet Master from Ghost in the Shell or the Red Queen from Resident Evil.

The real AI research done these days is (obviously) somewhat behind this curve. There are actually two reasons for this. The first one, of course, is that AI is really goddamn hard. I mean, even turning written sentences into something a computer can understand (called natural language processing) is a huge field fraught with terrible difficulties. The language we use to speak to each other is filled with ambiguity and relies heavily on context and implicit knowledge in the reader. If you haven’t seen any of the films or don’t know any of the stories I’ve mentioned above, would you understand this post?

Even when I’m not discussing dodgy scifi there’s no reason for you to understand everything I say. There’s a classic ambiguous sentence in linguistics and AI which is always trotted out on these occasions:

Time flies like an arrow.

There are at least five different meanings for this sentence. The fact that you’ll initially think “time travels quickly and in a straight line, like an arrow flies” is only because of a huge amount of cultural knowledge that’s really hard to explain to a computer. (If you’re wondering what some of the other interpretations are, here’s Groucho Marx with one of them: “time flies like an arrow; fruit flies like a banana”.) To bring syntactic ambiguity back into the world of Hollywood, I’d now like to quote that classic exchange from Wayne’s World:

“Can I be frank?”

“Can I still be Garth?”

The second reason why artificial intelligence is not as advanced as it could be is rather ironic: hubris. The stories of mankind’s creations exceeding our control are meant to teach us humility, and a little of that same humility might well have helped AI research along.

In the late 60s and early 70s the great results achieved in AI research were dwarfed by the expectations. It was a classic case of the war being “over by Christmas”. No matter what spectacular insights or advances in computer vision or knowledge representation were made, they were guaranteed to fall well short of the fantastical claims being made.

A few damning papers were released, rubbishing particular areas of AI research. Consequently, the whole field suffered a massive downturn in funding for nearly twenty years. Fortunately a lot of research continued under other names.

But it seems that the new humility learned from this “AI winter” has reaped some rewards. The principal one, I think, is the understanding that artificial intelligence is not limited to “creating artificial humans”. In fact, it’s often said that “AI is whatever we do not understand”. When we understand it, it becomes one of the ordinary tools of computing. This is certainly borne out by the things AI research has given us: mundane things like search engines, spam filters and computer chess programs.

Either way, AI will continue to provide interesting new technologies for us in the real world, and just as importantly provide a means to explore our humanity in art.

10th-Apr-2007 08:50 pm - Hollywood Computing: Programming

Programmers in the movies get a pretty raw deal, I think. For a start, it’s very rare for them to use any real programming languages. Most of what they seem to do falls into two categories:

  1. Typing at very high speeds while screeds of random alphanumeric characters fly up the screen. See The Matrix for examples of such silliness.
  2. Manipulating 3D blocks into interesting patterns and then informing the computer — by talking to it — to “upload virus”. See Swordfish for this one.

I’ll ignore type two, because I think everyone can see this is an absurd fantasy. Controlling advanced high-powered electronic devices is not at all similar to playing with lego.

The first category is at least based in something that resembles reality, but (a) it doesn’t resemble reality very well and (b) it’s also many years out of date. (By which I mean nearly the entire history of computing out of date.)
