You may have seen the program Zoomify, or similar programs like OpenZoom and DeepZoom, which quickly display high resolution images on websites without downloading large amounts of data – only the part that is needed is downloaded. Most of these applications use some kind of plugin to work (e.g. Flash or Silverlight), so I wanted to see if we could create one without any plugin, using only the HTML5 standards.
Subversion – folders and projects
This post is about folder structures and project management in Subversion. If you don’t already know what Subversion is, check out the previous post: http://blog.akademy.co.uk-tips/2009/09/source-control-beginning-with-subversion/
Folder Structure
The folder structure you implement in your repository can have a significant impact on how well your project is managed, now and in the future. With the right forethought you can implement things like:
- release version tagging,
- multiple simultaneous version releases,
- and code, installers and build systems all kept in the same place.
Tags – release version tagging
Tagging is the term used for “marking” a certain version of your files as important in some way. Most often this will be to mark public releases of your code; it enables you to go back to that specific version even after other changes and version releases have taken place. In Subversion there is no specific tagging mechanism; instead you simply take a copy of the code as it is at that moment. This sounds like it would use a whole lot of disk space, but remember that Subversion only ever records the changes between files – a tag has no changes in it, so it is copied virtually for free.
To keep track of all your tags you’ll need somewhere to put them, and in Subversion that means a folder. So, at the top level of your repository, create a folder called “Tags”. Let’s assume for now that you have another folder called “Main” which contains your source. To create a tag you just copy it, so type something like:
svn copy file:///your/path/TestProj/Main file:///your/path/TestProj/Tags/release-1.0 --message "1.0"
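Getting back to a tagged release later is just as simple; since a tag is only a folder, you can list and check out tags like anything else. A quick sketch, using the same example paths as above:

svn list file:///your/path/TestProj/Tags
svn checkout file:///your/path/TestProj/Tags/release-1.0 MyRelease-1.0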
Branches – multiple simultaneous version releases
To work on multiple versions you’ll need to once again copy the main source code. You might need to work on two different releases if you have two major changes to the code which you want to test separately, or you need to make bug fixes in an old release but don’t want to mix up the new code, or you just want to try an idea but don’t want to mess with the main source.
You’ll need to keep this code separate too, so create another folder at the top level of your repository. We’ll call this one “Branches”, as you are copying the main code in order to subsequently change it – it branches away from the main part. Now you just need to copy the main code, so type:
svn copy file:///your/path/TestProj/Main file:///your/path/TestProj/Branches/newtest --message "fixing"
This command is nearly identical to the tag one, because all you are “really” doing in both cases is making a folder and copying some files into it.
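Once the branch exists you can point a working copy at it with “svn switch”, and later merge the finished changes back into Main. A rough sketch, using the same example paths and assuming a reasonably recent Subversion (1.5 or later) that tracks merges for you:

svn switch file:///your/path/TestProj/Branches/newtest
(edit and commit as usual; the changes go to the branch, not to Main)
svn switch file:///your/path/TestProj/Main
svn merge file:///your/path/TestProj/Branches/newtest
svn commit --message "Merged newtest back into Main"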
Main (or Trunk) – code, installers and building
When you read up on Subversion you’ll come across the folder “Trunk”. Once again, this is Subversion, so it’s just a folder, and it’s where your main source is edited from. (It’s called “Trunk” because it’s where the “Branches” come from!) The paragraphs above have called it “Main”.
I tend to split my main folder into several others to make the full building of a program easier. Typically this will be something like “Source”, “Installers” and “Builder” – “Source” holds things like C++ files, “Installers” holds the files needed to create an installer, and “Builder” holds the files needed to automatically build all the bits into some kind of release, such as a CD (you might like to add “Content” or “Manuals” or something else of your own). This is very useful when you need to tag something, as everything needed to rebuild a specific release will be copied into that tag.
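As a rough sketch, the sort of layout I mean looks something like this (the folder names are just the examples above – add or rename to suit your own project):

Main/
    Source/      (C++ files and other source code)
    Installers/  (files needed to create the installer)
    Builder/     (scripts that build everything into a release, e.g. a CD)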
Trunk, Branches, Tags or anything.
Obviously the folders you’ll need will depend on your own project. Don’t be afraid to experiment with your own names and structures – the folders are yours to play with!
A few caveats on changing from “Trunk”, “Branches” and “Tags”
- Some client programs can automatically create tags and branches if you stick to these folder names.
- Much of the documentation will make reference to these folder names.
Multiple projects
Up until now I have assumed a single project in your repository, but that’s not the only way to do it. There are two main ways to store multiple projects in Subversion, each with positives and negatives.
- Create multiple repositories, one for each project.
- Create a single repository with a top level folder for each project.
Multiple repositories, single projects
This is the simplest to manage. Every new project has its own repository and is entirely separate from any others.
Positives:
- You can reduce the impact from a single hardware failure as repositories can be kept on separate hard drives.
- You can back up each project separately, depending on its value.
- You can easily give a user access to a single project.
- User errors in Subversion are limited to a single project.
Negatives
- Users will need to be created separately for each repository.
- The merging of projects is very difficult.
Single repository, multiple projects
A single repository, with a top level folder for each project. Each project folder has the usual “Trunk”, “Branches” and “Tags” inside it.
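As a sketch, that layout looks something like this (the project names are just placeholders):

ProjectOne/
    Trunk/
    Branches/
    Tags/
ProjectTwo/
    Trunk/
    Branches/
    Tags/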
Positives
- A single list of users is all that is needed.
- A single, simple, backup procedure.
- Merging of several projects into one is very easy.
Negatives
- You can mistakenly give a user access to any project.
- A single hardware failure can wipe out all projects at once; a single backup is similarly at risk.
- A Subversion user error in one project can impact all the others.
Conclusion
There is no right or wrong way to organise your folders. It really depends on your use; for instance, I use Subversion to back up my fiction writing, and that repository contains no folders at all, just a list of files.
There are some best practices you should consider though: decide how you’ll make tags and branches if you’ll need them, and whether you are going to use single or multiple project repositories.
However, if you are looking for a quick set-up, I’d recommend a single repository per project with a top-level folder structure of “Trunk”, “Branches” and “Tags”. Inside that Trunk create a folder for each of the different parts you’ll need, although more parts can always be added later.
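If you want to set that recommended structure up from the command line, something like this should do it (the paths and project name are only examples):

svnadmin create /your/path/TestProj
svn mkdir file:///your/path/TestProj/Trunk file:///your/path/TestProj/Branches file:///your/path/TestProj/Tags --message "Initial folder structure"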
That concludes this blog. A future blog will include subversion clients amongst other things.
PhotoSketch photoshops for you
PhotoSketch is some nifty software which allows you to creatively edit your photos by adding new parts to them.
All you do is make a quick sketch of your scene, label each bit with some text and, seemingly by magic, your sketch is transformed into a unique photo.
It looks like it takes your basic shapes and matches them to parts of other photos. I’d imagine the parts have already been labelled inside the photos (maybe by way of GWAP). Then, with some really clever pixel matching techniques, the parts are embedded into a single photo background. In the photo above you can see the original photos and the sketch on the right, with the new, automatically created photo on the left.
More information from:
- Their homepage (looks like traffic has caused problems)
- A video description.
- More from Gizmodo
Subversion – source control
A good source control system is a must for almost all programming projects. With one you can:
- Quickly check which lines you’ve changed across multiple files.
- Revert changes to any previous date.
- Never lose any file again.
- Keep track of why a file was changed, by whom and when.
- Take snapshots of released code versions.
- Manage multiple engineers working on the same code base.
- Automatically combine code from multiple engineers.
- Split code development into various paths then quickly combine at a later date.
Over the years I’ve used many different source control systems – SourceSafe, CVS, Perforce, CodeSafe – but the one I’m most impressed with is the only one I use at home and the one I encourage at work: Subversion.
Subversion is an open source source-control system (and they “eat their own dog food” – explained). It’s available on Linux, Windows and Mac, and you can freely download it from their website.
If you’ve used CVS before then you’ll probably understand how it works, but if you are more of a SourceSafe person (I feel for you) then Subversion takes a little explanation. Subversion works like this:
- First you create a “repository”; this is where all the files are stored. Let’s call it “TestProj”. You’ll normally create this on your server so that multiple people can access it.
type: svnadmin create /your/path/TestProj
- Now decide where you want your own working copy to exist – let’s call it “MyTestProj” – and “Check Out” TestProj. This simply creates all the files in your repository (there aren’t any yet) together with a folder called “.svn” which holds the admin bits that Subversion needs (you’ll see one in every subfolder you create).
type: svn checkout file:///your/path/TestProj MyTestProj
- Now create some files, say “TestProj.cpp”, “TestProj.h” and “TestProj.ico”, in the MyTestProj folder. These currently exist only on your computer, so you need to “Add” them to Subversion. This just means you “are going to” add them; Subversion won’t do anything until you subsequently “Commit” these changes, at which point you’ll get the opportunity to comment on them.
type: svn add TestProj.cpp TestProj.h TestProj.ico
type: svn commit --message "Adding first files"
- Now all your files are committed and added inside the repository you created. Anyone else doing a “Check Out” will see them in their working folder.
You can stick any folder or file in your working copy, and make any changes to the files you wish; nothing will happen to the repository until you commit. You can “Update” your copy at any time to get everyone else’s changes, or “Commit” your own changes back. By default there is no locking of files – if you change the same file as someone else then Subversion automatically merges the changes together –
…take a deep breath if you are a SourceSafe user, this will sound like witch craft…
– in almost all cases code can be merged automatically, as different lines will have been changed (this was hard for me to believe back when I first used it, but believe me, it’s clever enough never to cause a problem). On rare occasions (how rare depends on project size, number of users and frequency of updates) you’ll get a “Conflict”, which means you’ve changed the same line someone else has, and you’ll have to sort this out yourself before you are allowed to commit your changes.
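To make that concrete, a typical update-and-commit cycle, including what to do if you hit a conflict, might look something like this (a sketch only, using the example file names from above):

type: svn update
(if Subversion reports a conflict in, say, TestProj.cpp, open the file, fix the marked lines, then tell Subversion it’s sorted out)
type: svn resolved TestProj.cpp
type: svn commit --message "My changes, merged with everyone else's"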
There’s more information at:
- Their FAQ: http://subversion.tigris.org/faq.html
- And a very thorough online book here: http://svnbook.red-bean.com/
- There’s also the command-line help: svnadmin help <command> and svn help <command>
That ends the first Subversion blog. In future ones I’ll be looking at a folder structure you can use to make your projects easier to manage (see here), some of the client apps that exist to make working with Subversion easier and more efficient, backups, and a few more bits and pieces.
WordPress – my new home
As you can see – assuming you are reading this on my blog – I’ve just moved over from blogger.com to my own space; notice the new web address: “http://blog.akademy.co.uk-tips”.
After having a quick look around I decided to go with WordPress, WordPress.org that is. WordPress.org is where you can download the very popular blogging PHP system and install it to your own website (WordPress.com is the website that hosts blogs for you for free).
I made the decision to switch because I wanted a bit more control over my blog and files than Blogger could provide, and I wanted to be able to play with the back end too. I decided not to go down the line of creating one myself; usually I’d have put something together just to see how to do it – teaching myself as I went – but in this particular case I’d never have had the time to make something I would have deemed stable enough for long term use. So WordPress.
All you need to install WordPress on your own website is PHP 4.3 or higher and MySQL 4.0 or higher. Run through the installation guide you get when you download the files – it’s really quick and simple. I had my blog installed in about 10 minutes, and most of that was uploading time.
Even better, I already had a couple of blogs on blogger.com and wanted to move them over. I thought this was going to be a bit of a nightmare, but I was very wrong: I just gave WordPress the address and it downloaded everything automatically. Log in as an Administrator and select Tools->Import->Blogger. Magic.
There are also loads of themes and extensions for WordPress, which give you a great deal of control over what you show and how you’d like it to be presented. You can really personalise it any way you’d like.
All in all I’ve been very impressed by WordPress. It’s simple to install, easy to extend, a joy to use, and it’s used by millions every day.
A few things to take into account if you are thinking of switching to your own host:
- You’ll have to perform backups yourself: database and website.
- You’ll have to upgrade the system yourself (although this is pretty straightforward if you haven’t made any of your own changes).
- Hosting your own webpages will usually cost a fee.
If you are looking to host your own blog then WordPress is thoroughly recommended.
Galaxy Zoo Two
It’s back, and better than ever before. Now you can help classify galaxies in even more detail, but still with the excitement of exploring the cosmos and helping expand human knowledge further.
Take part here.
But for some great evidence of why it’s helpful to take part, see this page. It shows galaxies never seen before, and certainly never categorised, now neatly divided up into lots of categories. The “anything odd” section has some really interesting objects in it. More info from the Galaxy Zoo blog here.
See also: Human computing power. (2008-10-20), Intergalactic Explorer. (2007-07-13)
Mars goes Google.
The beautiful Google Earth program has gone Martian. The planet Mars is now explorable in full 3D (not just an overlay).
See Olympus Mons rise above the distant horizon or fly down Valles Marineris in a full 3D projection. You can even follow the landers’ progress, and view some of the panoramic high resolution shots just as the rovers Spirit and Opportunity saw them.
This video from the official “unofficial” Google Earth Blog clearly shows off some of the best features:
Just download Google Earth, click on the planet button in the toolbar and select Mars. Some informative pictures here too:
http://www.gearthblog.com/blog/archives/2009/02/google_earth_5_the_new_google_mars.html
Teaching programming to the masses.
Starting to learn programming early certainly has its benefits – the best programmers are almost always the ones who started when they were young – and this leads us to the main part of this blog: teaching programmers.
I grew up programming the Sinclair ZX Spectrum – not an experience I’d want anyone else to try (though it was pretty clever for its time). This was always a rather lonely pursuit and, in many cases, still is. However, programming has come a long way since then: object-oriented programming was a massive improvement, and garbage collectors have improved far enough to be pretty fast and reliable.
But has the way we program really changed that much? Well, no, not really. All programming comes down to opening up a file and writing symbols in certain orders that only a select few can understand. How is the majority supposed to learn what’s going on? Isn’t it a little strange that the whole world uses software but only a tiny minority actually know how to create it?
Well, I think so, but the good news is that programming is slowly going mainstream, and there are several really useful and fun pieces of software available to teach it. Here are my top picks.
Alice (http://www.alice.org/) (Personal Favourite!)
With this you first create a 3D world with characters and props through a simple drag and drop interface. You then control what happens through coding. The tutorial is excellent and gets you going immediately. It won’t be long before you are creating your own little world (see mine here).
Scratch (http://scratch.mit.edu/)
Two-dimensional images can be controlled to make all kinds of interesting games and tools. There’s a large list of examples created by people across the world.
Kodu (http://research.microsoft.com/en-us/projects/kodu/)
This one is especially for creating games and just looks really nice. The programming is just drag and drop.
Karel (http://mormegil.wz.cz/prog/karel/prog_doc.htm)
This isn’t actually a program but a fully fledged programming language, designed especially for people new to programming.
Phishing for phishers: An idea
Phishing
I was curious to know how close the dummy login page looked to the real one (I should point out that you should never normally even click on a link in an email like this – it can be really unsafe!). So that you don’t have to try it yourself, the two images below show both for you to have a look at:
As you can see, they are pretty much identical (the first one is the fake).
Taking care
Don’t worry, it isn’t hard to avoid these phishing scams. Here are a few tips to help you spot them.
Is it likely?
Firstly, it’s actually quite unlikely that your bank would suddenly need to contact you for any reason. If something’s important, they’ll almost certainly send a letter.
Avoid links
If you do receive an email and you think it is genuine, don’t use any links embedded in the email; instead open your web browser and type the address in manually, or use one of your own bookmarks if you have one.
Fake URLs
If you do use a link inside an email (or even on the internet) it’s a good idea to check what the URL is. This appears in the address box, usually at the top of your browser (for instance, this website’s URL is “http://akademy-tips.blogspot.com/” – you should see this in the box).
Phishers usually attempt to trick you by including the real address within their own. For instance, this is a fake URL: “http://akademy-tips.blogspot.com.fakingit.com/” – notice the additional text at the end, “fakingit.com” – this is actually the real address of the website.
Always check the right-most text of the host part of the URL. This is the part between the “http://” (or “https://”) and the first “/”, e.g.:
- http://www.bbc.co.uk/merlin/episodes/ (host: www.bbc.co.uk)
- http://en.wikipedia.org/wiki/Main_Page (host: en.wikipedia.org)
- http://akademy-tips.blogspot.com.fakingit.com/ (host ends in fakingit.com – fake!)
- http://akademy-tips.blogspot.com/ (host: akademy-tips.blogspot.com – genuine)
Many modern browsers actually highlight this part for you now.
An idea
Once you’ve realised what’s going on, any information can be added into these websites. In fact, simply entering dummy account information will start to put off the phishers. However, a much more ingenious thing to do would be for the real companies affected by these emails to set up dummy accounts, and then enter those details into the phishing websites.
Now, as soon as these dummy details are entered on the real website, the company can take immediate action to stop them, perhaps logging and banning the IP address so that no real accounts can be used from that location. Alternatively, with the help of the police, money transfers could perhaps be tracked as they are made.
Of course, it’s highly likely that this is already taking place. Perhaps it’s only a matter of time before these people are caught.
Let me know what you think.
Human computing power
The idea
The internet is an ideal way to get many thousands of people together, and with the right task really great things can be achieved. Of course, it’s not as easy as uploading thousands of images and expecting people to look through them for you – if we don’t find the images interesting then we are simply not going to take part. One way to keep a task interesting is to make the process into a game and compete with other like-minded people.
The other problem is making sure the tasks are being performed correctly. The current preferred solution is to first train the participants and secondly to randomly test them against already checked responses. This also removes any unscrupulous individuals intent on causing problems and ultimately makes the completed task more reliable.
Some examples of human computing follow.
Clickworkers
The first test of the idea was back in 2000, and was called Clickworkers. It was run by NASA and the idea was simply to mark craters on Mars. The interface is quite basic and reflects some of the early internet’s drawbacks, clearly lacking some of the enhancements of more recent projects, but it proved that the concept could work.
The stardust project
One of the first projects to use some of the latest Web 2.0 ideas was the Stardust Project. Stardust was a sample return mission to collect interstellar particles passing through the Solar System. The particles were tiny and captured in a gel-like substance (see image); it was, as they put it,
“…like looking for 45 ants on a football pitch.”
More information about the project is here.
For the website the “gel” was imaged at high resolution and small pieces were then farmed off to individuals. First each individual was given a test to make sure they knew what they were looking for; then they were given a real image of the “gel” and had to decide whether it contained a particle or not. The whole idea was to search for the stuff stars (and everything else) are made of, as if you were some intergalactic explorer. As they put it:
“The best attitude for this project is this: Have fun!”
Good advice for anyone wanting to set up their own human computing experiment.
The galaxy zoo project
The next project to give this a try was the Galaxy Zoo project. The idea here was to classify galaxies as spiral (as in the image) or elliptical. There exist thousands and thousands of photographs of the night sky unseen by human eyes, just waiting for the next great discovery to be made – step up the next group of intergalactic explorers.
This project was in a similar vein to the previous one, but executed with a slightly slicker interface. It also had a massive following, with millions of galaxy classifications quickly taking place. Participants were again trained, and tested during their continued classification. Friendly competition was encouraged with high-score tables and records of right and wrong classifications.
The foldit game
One of the more advanced pieces of software for human computing tasks is the Foldit game; it’s also one of the cleverest ideas and one of the most fun to do. The idea is that you have to fold proteins so that they have the right shape to combine with other proteins, which could then be used to help cure real-world diseases.
This one is attempting something slightly different from the other examples here. Rather than classification, this one actually wants you to solve some rather complex problems. Many of the puzzles have unknown solutions and there may even be some that have no solution.
It’s fairly simple to get started: the puzzles have a nice learning curve and the interface is well designed. Just use the mouse to grab or shake parts of the protein to see what happens. You’ll have to download the program to try it yourself.
More detailed information can be found here.
GWAP.com
This is the final one we’ll look at here, and the idea takes various human computing tasks into the mainstream. GWAP comes from “Games With A Purpose” and there are several games here to compete in. All are primarily designed to be fun to play, but are also cleverly designed to help computers recognise things like images or words.
To check that a game is being played correctly, pairs of players co-operate anonymously and try to, for instance, tag a photo with the same word, or ring the same part of an image. This means the game effectively checks itself, and the more people play, the more reliable the information becomes.
Check out more info here. And for an in-depth look into GWAP and similar ideas, check this video out.
Summary
It seems like more and more projects are being started that utilise these unique human abilities, and each new project brings an ever more ingenious idea. But the question is: how long will it be before computers have the ability to do these tasks themselves? Well, with more of these projects actually aimed at improving computers in the first place, maybe it’ll be sooner than we think.
One last thing I must mention though is the darker side of human computing. It’s already been shown that criminals have used this technique to bypass the CAPTCHA login systems by employing enough humans to sort through the vast outputs.
Let me know of any other human computing projects you’ve come across.