But the real problem comes with larger images. If you want to let people draw on full-resolution photos, you can't use the method above: it uses too much memory and runs far too slow, because you are essentially creating a very large context that gets redrawn every time you do something, and Apple's frameworks don't seem to be designed for that. For example, a 3000x4000 photo needs about 48 MB of RGBA pixel data for a single backing context, before you've allocated anything for undo. So the method above works fine if you only need something at screen resolution, but for larger sizes you need OpenGL.
The challenge with OpenGL is the maximum texture size (1024x1024 on older devices), so I had to write some code that splits an image into segments (a sketch of the tiling is below). On top of that I wrote a painting layer, which is basically rendering a whole lot of point sprites onto a texture, and this turns out to be really fast. You can then just swap the fragment shader to produce different brushes. You can also create separate texture layers that get blended together, giving you something like the layers in Photoshop; sketches of the brush stamping and the layer compositing follow the tiling code. This approach is fast and has a low memory footprint compared to trying to use a UIView directly. It's still a lot of memory when dealing with big images, but it's the best way I have found.
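Here is a minimal sketch of the tiling step in C, assuming the source pixels are already in memory as tightly packed RGBA8; the ImageTile struct and function names are mine for illustration, not from any Apple API:

```c
#include <OpenGLES/ES2/gl.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    GLuint texture;      /* GL texture holding this tile's pixels */
    int x, y;            /* tile origin in image coordinates */
    int width, height;   /* tile dimensions (<= max texture size) */
} ImageTile;

/* Copy one w*h sub-rectangle out of the image and upload it as a texture. */
static GLuint upload_tile(const unsigned char *pixels, int imageWidth,
                          int x, int y, int w, int h)
{
    unsigned char *buf = malloc((size_t)w * h * 4);
    for (int row = 0; row < h; row++)
        memcpy(buf + (size_t)row * w * 4,
               pixels + ((size_t)(y + row) * imageWidth + x) * 4,
               (size_t)w * 4);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, buf);
    free(buf);
    return tex;
}

/* Split the image into a grid of tiles no larger than the GL maximum. */
ImageTile *split_image(const unsigned char *pixels, int imageWidth,
                       int imageHeight, int *outCount)
{
    GLint maxSize;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize); /* 1024 on older devices */

    int cols = (imageWidth  + maxSize - 1) / maxSize;
    int rows = (imageHeight + maxSize - 1) / maxSize;
    ImageTile *tiles = malloc(sizeof(ImageTile) * cols * rows);

    int n = 0;
    for (int ty = 0; ty < rows; ty++)
        for (int tx = 0; tx < cols; tx++) {
            ImageTile *t = &tiles[n++];
            t->x = tx * maxSize;
            t->y = ty * maxSize;
            t->width  = (t->x + maxSize <= imageWidth)  ? maxSize : imageWidth  - t->x;
            t->height = (t->y + maxSize <= imageHeight) ? maxSize : imageHeight - t->y;
            t->texture = upload_tile(pixels, imageWidth, t->x, t->y,
                                     t->width, t->height);
        }
    *outCount = n;
    return tiles;
}
```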
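And here is a sketch of stamping a stroke into one tile: you attach the tile's texture to a framebuffer object and draw the stroke as GL_POINTS, so each vertex becomes one brush stamp. The shader source is a minimal soft round brush, and the uniform/attribute names are illustrative, not from the original code:

```c
#include <OpenGLES/ES2/gl.h>

/* Fragment shader for a soft round brush. The matching vertex shader
   must write gl_PointSize (the brush diameter in pixels). Swapping
   this shader body out is how you get different brushes. */
static const char *brushFragmentShader =
    "precision mediump float;\n"
    "uniform vec4 u_color;\n"
    "void main() {\n"
    "    /* gl_PointCoord runs 0..1 across the point sprite;\n"
    "       fade alpha toward the edge for a soft falloff. */\n"
    "    float d = distance(gl_PointCoord, vec2(0.5));\n"
    "    float a = 1.0 - smoothstep(0.3, 0.5, d);\n"
    "    gl_FragColor = vec4(u_color.rgb, u_color.a * a);\n"
    "}\n";

/* points is count (x, y) pairs in the tile's coordinate space. */
void stamp_points_into_tile(GLuint program, GLuint tileTexture,
                            int tileW, int tileH,
                            const GLfloat *points, int count)
{
    /* Render-to-texture: the tile becomes the framebuffer's color target,
       so the stamps land directly in the tile's pixels. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tileTexture, 0);
    glViewport(0, 0, tileW, tileH);

    glUseProgram(program);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    GLint pos = glGetAttribLocation(program, "a_position");
    glEnableVertexAttribArray(pos);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, points);

    /* One draw call stamps the whole stroke segment. */
    glDrawArrays(GL_POINTS, 0, count);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
}
```

The reason this is fast is that the expensive work (rasterizing thousands of brush stamps) happens entirely on the GPU, and only the touched tiles ever get redrawn.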
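Finally, a sketch of the layer compositing: each layer is just a texture, drawn back to front as a quad with alpha blending, which gives you a basic "normal" blend mode (other Photoshop-style modes would need their own shader math). The draw_textured_quad helper is hypothetical, standing in for whatever full-screen quad drawing you already have:

```c
#include <OpenGLES/ES2/gl.h>

/* Hypothetical helper: draws a quad covering the viewport using the
   currently bound texture, with its color scaled by opacity. */
extern void draw_textured_quad(GLfloat opacity);

void composite_layers(const GLuint *layerTextures, const GLfloat *opacities,
                      int layerCount)
{
    glEnable(GL_BLEND);
    /* "Over" operator for premultiplied alpha, which is what
       Core Graphics-sourced textures normally use. */
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    /* Back to front: layer 0 is the bottom of the stack. */
    for (int i = 0; i < layerCount; i++) {
        glBindTexture(GL_TEXTURE_2D, layerTextures[i]);
        draw_textured_quad(opacities[i]);
    }
}
```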