Implementing Actors - Guild Internals
March 20, 2014 at 12:35 AM | categories: python, actors, guild, concurrency

This post provides an overview of how Guild Actors work. If you missed what Guild is, and how it contrasts with other approaches, it might be a good idea to read these two posts first:
It starts off with a trivial actor, showing what the basic method decorators implement. This is then expanded to a slightly more complex example. Since the results of the decorators are used by a metaclass to transform the methods in the appropriate way, there's a brief recap of what a metaclass is. I then discuss how the ActorMetaclass is actually used, and give an overview of its logic. Next we walk through what actually happens inside the thread. Finally the implementation of binding of late bindable methods is discussed, and due to all of this machinery, the implementation of Actor methods turns out to be remarkably short and clear.
So let's start off with the basics...
Each actor is an instance of a subclass of the Actor class. The Actor class is a subclass of threading.Thread, meaning each Actor is a thread. In order to make calls to methods on the Actor, the user must have decorated the methods using either the actor_method decorator or the actor_function decorator. If the user doesn't do this, then the calls they make are not threadsafe.
In practice, the actor_method decorator effectively operates as follows. The following:
class example(Actor):
    @actor_method
    def ping(self, callback):
        callback(self)
... means this:
class example(Actor):
    def ping(self, callback):
        callback(self)
    ping = ('ACTORMETHOD', ping)
Similarly, all decorators in guild.actor do this - they literally just tag the function, marking it to be turned into an actor method, actor function, process method, late-bound method, etc.
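For illustration, a tagging decorator along these lines could be as simple as this (a sketch of the idea, not necessarily Guild's verbatim source):

def actor_method(fn):
    # No wrapping here at all - just tag the function so that the
    # metaclass can spot it later and build the real thread safe stub.
    return ('ACTORMETHOD', fn)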
That means this ...
class example(Actor):
    @actor_method
    def ping(self, callback):
        callback(self)

    @actor_function
    def unique_id(self):
        return 'example_' + str(id(self))

    @process_method
    def process(self):
        self.Pling()

    @late_bind_safe
    def Pling(self):
        pass
... is transformed by the decorators to this:
class example(Actor):
    def ping(self, callback):
        callback(self)

    def unique_id(self):
        return 'example_' + str(id(self))

    def process(self):
        self.Pling()

    def Pling(self):
        pass

    ping = ('ACTORMETHOD', ping)
    unique_id = ('ACTORFUNCTION', unique_id)
    process = ('PROCESSMETHOD', process)
    Pling = ('LATEBINDSAFE', Pling)
If that was all though, this wouldn't be a very useful actor, since none of those methods could be called.
In order to make this useful, Actor uses a metaclass to transform this into something more useful.
Recap: What is a metaclass?
In python, everything is an object. This includes classes. Given this, classes are instances of the class 'type'. A 'type' instance is created and initialised by a call to a function with the following signature:
def __new__(cls, clsname, bases, dct):
The interesting part here is dct.
dct is a dictionary where the keys are names of things within the class, and the values are what those names refer to. Given this dictionary is used to create the class, any values which are functions become methods, and any other values become the initial values of class attributes. This is also why we talk about a 'class statement' rather than a class declaration.
This also means that the following:
class Simple(threading.Thread):
    daemon = True
    def run(self):
        while True:
            print 'Simple'
... is interpreted by python (approximately) like this:
def run_method(self):
    while True:
        print 'Simple'

Simple = type('Simple',
              (threading.Thread,),
              { 'daemon' : True,
                'run' : run_method })
The neat thing about this is that it means we can intercept the creation of the class itself.
ActorMetaclass
Rather than the Actor class being an instance of type, the Actor class is an instance of ActorMetaclass. ActorMetaclass is a subclass of type, so it shares this __new__ method. Given that metaclasses are inherited just like anything else, any subclass - like our 'example' above - shares this metaclass.
As a result, the above 'example' class statement is (approximately) the same as this:
def ping_fn(self, callback):
    callback(self)

def unique_id_fn(self):
    return 'example_' + str(id(self))

def process_fn(self):
    self.Pling()

def Pling_fn(self):
    pass

example = ActorMetaclass('example',
                         (Actor,),
                         { 'ping' : ('ACTORMETHOD', ping_fn),
                           'unique_id' : ('ACTORFUNCTION', unique_id_fn),
                           'process' : ('PROCESSMETHOD', process_fn),
                           'Pling' : ('LATEBINDSAFE', Pling_fn) })
This results in a call to our __new__ method. Our __new__ method eventually has to call type.__new__() as in the section above, but before it does, we can replace the values in the dictionary.
The logic in Guild's metaclass is this:
new_dct = {}
for name, val in dct.items():
    new_dct[name] = val
    if val.__class__ == tuple and len(val) == 2:
        tag, fn = str(val[0]), val[1]
        if tag.startswith("ACTORMETHOD"):
            # create stub function to enqueue a call to fn within the thread
        elif tag.startswith("ACTORFUNCTION"):
            # create stub function to enqueue a call to fn within the thread,
            # wait for a response and then to return that to the caller.
        elif tag.startswith("PROCESSMETHOD"):
            # create a stub function that repeatedly (effectively) enqueues
            # calls to fn within the thread.
        elif tag == "LATEBIND":
            # create a stub function that when called throws an exception,
            # specifically an UnboundActorMethod exception. The reason is
            # because it allows someone to detect when an 'outbox'/our late
            # bindable method has been used without being bound to.
        elif tag == "LATEBINDSAFE":
            # In terms of the implementation, this actually has the same effect
            # as an actor method. However in terms of interpretation it's a hint
            # to users that this method is expected to be rebound to a different
            # actor's method.
return type.__new__(cls, clsname, bases, new_dct)
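To make the ACTORMETHOD branch concrete, the stub it builds could look something like the following sketch. This assumes the enqueue-on-self.inbound behaviour shown in the transformed class in the next section, rather than being a copy of Guild's exact code:

def make_actor_method_stub(fn):
    def stub(self, *args, **argd):
        # Proxy: enqueue the real function and its arguments for the
        # actor's own thread to execute later.
        self.inbound.put_nowait((fn, self, args, argd))
    return stub

new_dct[name] = make_actor_method_stub(fn)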
Actual implementation of an Actor subclass
The upshot of this is that the decorator tags the functions which need a proxy outside the thread, allowing calls to them to be enqueued for the thread to execute.
This means our example class above is (effectively) transformed into this:
class example(Actor):
    def ping(self, *args, **argd):
        def ping_fn(self, callback):
            callback(self)
        self.inbound.put_nowait( (ping_fn, self, args, argd) )

    def unique_id(self, *args, **argd):
        def unique_id_fn(self):
            return 'example_' + str(id(self))
        resultQueue = _Queue.Queue()
        self.F_inbound.put_nowait( ( (unique_id_fn, self, args, argd), resultQueue) )
        e, result = resultQueue.get(True, None)
        if e != 0:
            raise e.__class__, e, e.sys_exc_info
        return result

    def process(self):
        def process_fn(self):
            self.Pling()
        def loop(self, *args, **argd):
            x = process_fn(self)
            if x == False:
                return
            self.core.put_nowait( (loop, self, (), {}) )
        self.core.put_nowait( (loop, self, (), {}) )

    def Pling(self, *args, **argd):
        def Pling_fn(self):
            pass
        self.inbound.put_nowait( (Pling_fn, self, args, argd) )
The Actor class
From these stub methods, it should be clear that the implementation of the Actor class has the following traits:
- Each actor has a collection of queues for sending messages into the thread.
- The thread has a main loop that consists of a simple interpreter (or event dispatcher if you prefer)
Additionally, Actors may have a gen_process method which returns a generator. This generator is then executed - given a time slice, if you will - by the actor's main loop in between checking each of the inbound queues & handling requests.
The reason for this being a generator is not for performance reasons. The reason for it is to allow the implementation of an Actor stop() method. That stop method looks like this:
def stop(self):
    self.killflag = True
The main runloop repeatedly checks this flag, and if set throws a StopIteration exception into the generator.
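In other words, the interruption could be performed along these lines (a sketch of the idea - the real run loop also has error handling around this):

if self.killflag:
    if g is not None:
        g.throw(StopIteration())   # unwinds the generator, running any finally: blocks
    return                         # leave the main loop, so the thread exits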
The upshot is that using a generator in this way allows the thread to be 'interrupted', to receive and handle messages in a threadsafe manner, and so on.
The logic within the thread is as follows:
def main(self):
    self.process_start()
    self.process()
    g = self.gen_process()   # set to None if fails
    while True:
        if g != None:
            g.next()
        yield 1
        if ...:   # any queue had data:
            if self.inbound.qsize() > 0:
                # handle actor methods
                command = self.inbound.get_nowait()
                self.interpret(command)   # if fails, call self.stop()
            if self.F_inbound.qsize() > 0:
                # Actor functions
                command, result_queue = self.F_inbound.get_nowait()
                result_fail = 0
                try:
                    result = self.interpret(command)
                except Exception as e:
                    # Capture exception to throw back
                    result_fail = e
                    result_fail.sys_exc_info = sys.exc_info()[2]
                result_queue.put_nowait( (result_fail, result) )
            if self.core.qsize() > 0:
                # used by 'process method'
                command = self.core.get_nowait()
                self.interpret(command)
        else:
            if g == None:
                # Don't eat all CPU if no generator
                time.sleep(0.01)   # would be better to wait on the queues.
(The above code ignores the error handling inside the code for simplicity)
Finally, the interpret function that executes the actual methods within the thread looks like this: (again ignoring errors)
def interpret(self, command):
    # print command
    callback, zelf, argv, argd = command
    if zelf:
        result = callback(zelf, *argv, **argd)
        return result
        # if there was a type error exception complain vociferously, and re-raise
    else:
        result = callback(*argv, **argd)
        return result
Binding Late Bound Actor Methods
Using our Camera and Display examples from before, this means effectively doing this:
camera = Camera()
display = Display()

camera.output = display.input
However, that last line is changing the state of an object which is owned by another thread. As a result we need to change this attribute within that thread. Using our code above, this is now quite simple:
@actor_method
def bind(self, source, dest, destmeth):
    # print "binding source to dest", source, "Dest", dest, destmeth
    setattr(self, source, getattr(dest, destmeth))
That's then used like this:
camera = Camera()
display = Display()

camera.bind('output', display, 'input')
Summary
Guild actors are implemented using a small number of inbound queues per object to allow them to receive messages. These messages are received by the thread, and interpreted as commands that cause specific methods to be called.
Decorators are used by the user to effectively tag the methods, describing how they will be used, allowing the ActorMetaclass to transform the calls into thread safe calls that enqueue data on the appropriate inbound queues.
The key reason for the use of decorators and the metaclass is to wrap up the thread safety logic in one place. It also acts as syntactic sugar, making the logic of Actor threads much clearer and simpler to interpret and use correctly.
The bulk of the logic of the message queue handling, along with user behaviour for an active actor, is implemented using generators, the reason being to allow the threads to be interrupted and shut down cleanly. Beyond that there are a small number of helper functions.
For those interested, take a look at the implementation on github.
As usual, comments welcome.
Readable concurrency in Python
March 16, 2014 at 05:30 PM | categories: python, actors, concurrency, kamaelia

Last week there were a couple of interesting posts by Glyph Lefkowitz and Rob Miller on concurrency. Both are well worth a read. One of the examples presented by Glyph is the canonical concurrent update problem. This essentially happens when an update takes multiple steps and can be interfered with. Rob's post essentially presents a solution in Go.
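To make the problem concrete, here's the classic lost update sketched with plain Python threads - an illustrative toy, not code from either post:

import threading

balance = [1000]   # shared mutable state, no locking

def deposit(amount):
    current = balance[0]            # step 1: read the balance
    # another thread can be scheduled here and read the same value...
    balance[0] = current + amount   # step 2: write it back - one update can be lost

threads = [threading.Thread(target=deposit, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print balance[0]   # usually 1200, but 1100 if the two threads interleave badly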
The core of this is that unconstrained shared mutable state in a concurrent situation is a bad idea. This is something that I've spoken about in the past with regard to Kamaelia. In fact, in Kamaelia, there were two ways of handling this. One was to essentially funnel all requests for updating values through a "cashier" component. The other was to use software transactional memory. Kamaelia provides tools for both approaches.
Guild also provides tools for both approaches. The key tool for the cashier approach is guild.actor. The key tool for the STM approach, is guild.stm.
Given Glyph's and Rob's posts had touched on ideas I've spoken about in the past, I thought it might be nice to work through the example in Guild. Initially we'll model an account as an actor, and test it with some basic non-threaded code. Then we'll test it with 3 actors randomly withdrawing cash and 1 more randomly adding cash. Finally, we'll show what happens when 2 actors both have accounts and are randomly transferring money from each other's accounts. (Interestingly, this last one kinda makes certain ideas of banking clearer to me :-)
Because they were the first names that sprang to mind, this post uses the names from characters from the Flintstones. I blame Red Dwarf.
Basic Account Actor
Source: guild/examples/blog/account-1.py
So, first of all the account actor using Guild. As before, first of all the code, then a discussion.
 1 | from guild.actor import *
 2 |
 3 | class InsufficientFunds(ActorException):
 4 |     pass
 5 |
 6 | class Account(Actor):
 7 |     def __init__(self, balance=10):
 8 |         super(Account, self).__init__()
 9 |         self.balance = balance
10 |
11 |     @actor_function
12 |     def deposit(self, amount):
13 |         # I've made this a function to allow the value to be confirmed deposited
14 |         print "DEPOSIT", "\t", amount, "\t", self.balance
15 |         self.balance = self.balance + amount
16 |         return self.balance
17 |
18 |     @actor_function
19 |     def withdraw(self, amount):
20 |         if self.balance < amount:
21 |             raise InsufficientFunds("Insufficient Funds in your account",
22 |                                     requested=amount, balance=self.balance)
23 |         self.balance = self.balance - amount
24 |         print "WITHDRAW", "\t", amount, "\t", self.balance
25 |         return amount
This should be pretty readable.
First of all the logic of what's happening:
- We define an exception InsufficientFunds to raise when someone tries to withdraw more money than the account contains
- We define a subclass of actor - Account. Since we need to initialise it with a balance, we must call the superclass initialiser at line 8. Our Account objects have 2 operations that they can perform: deposit and withdraw.
- withdraw checks that sufficient funds are available. If they are, then the funds are returned as a result, after updating the balance and logging the result. If they are not, an InsufficientFunds exception is raised, which the caller thread will have to deal with.
- deposit takes the amount of funds, updates the balance, logs the results and returns the new balance to the caller thread.
What's happening in terms of mechanics? (links below take you to the code in github)
- withdraw is an actor_function. What does this mean? It means that the caller calls the method. In the caller thread, this places the message ((withdraw, self, amount), resultQueue) onto an inbound queue to the actor. The caller thread then waits for a response. The actor receives the message, does the work, and posts the result back down the result queue. The stub in the caller thread retrieves this, and returns the result to the caller. If there was an exception thrown within the actor, this is re-raised inside the caller thread. As a result our withdraw function can look pretty normal. If there are insufficient funds, the caller gets an exception to deal with. If there are sufficient funds, the balance is updated, a message is logged to the console, and the amount of money is returned to the caller.
- deposit is also an actor_function. It doesn't need to be, because depositing money always succeeds, however it's nice for deposit to return the updated balance to the caller. (If the caller doesn't care, this could be an actor_method instead - along the lines of the sketch below.)
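For example, a fire-and-forget variant might look like this - hypothetical, not part of the example file:

@actor_method
def deposit(self, amount):
    # No result queue involved - the caller doesn't wait for confirmation.
    print "DEPOSIT", "\t", amount, "\t", self.balance
    self.balance = self.balance + amount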
Single threaded Account user
Source: guild/examples/blog/account-1.py
So, let's use this. In the main thread we'll create the account and start it. We'll then define 3 account users who always only withdraw funds - Fred, Barney and Wilma. In our simulation Betty is the person earning money - but she's not mentioned here.
Anyway, in each iteration through this loop, Betty earns 100, and Fred, Barney and Wilma each randomly pick an amount between 10 and 160. This continues over and over until someone tries to take more money than is in the account. We then report the amount grabbed, stop the account and exit. The code looks like this:
account = Account(1000).go()
fred, barney, wilma = 0, 0, 0
try:
    while True:
        account.deposit(100)
        fred += account.withdraw(random.choice([10,20,40,80,160]))
        barney += account.withdraw(random.choice([10,20,40,80,160]))
        wilma += account.withdraw(random.choice([10,20,40,80,160]))
except InsufficientFunds as e:
    print e.message
    print "Balance", e.balance
    print "Requested", e.requested
    print account.balance
    print "GAME OVER"
    print "Fred grabbed", fred
    print "Wilma grabbed", wilma
    print "Barney grabbed", barney

account.stop()
account.join()
There's not an awful lot to discuss with this. This logic should be pretty clear. The fact that the account is in a different thread isn't particularly interesting - but it shows the basic logic of depositing/withdrawing funds. What does the output look like?
DEPOSIT     100     1000
WITHDRAW    40      1060
WITHDRAW    40      1020
WITHDRAW    40      980
DEPOSIT     100     980
... snip ...
WITHDRAW    160     310
WITHDRAW    160     150
DEPOSIT     100     150
WITHDRAW    160     90
Insufficient Funds in your account
Balance 90
Requested 160
90
GAME OVER
Fred grabbed 800
Wilma grabbed 610
Barney grabbed 800
Multiple threads access the Account
Source: guild/examples/blog/account-2.py
In this example, we create two new actors - MoneyDrain and MoneySource.
- MoneyDrain - This sits there and repeatedly tries to withdraw random amounts of funds from the Account, and keeps track of how much money it's tried to grab. When the account has insufficient funds to withdraw from, the MoneyDrain gives up, complains and stops() itself.
- MoneySource - This sits there, and repeatedly adds a random amount of funds to the account.
We then rewrite our simulation as follows:
- We still have one shared account
- Fred, Betty, and Barney are now all MoneyDrains on Wilma
- Wilma is the sole source of income for the group - ie a MoneySource
The system is then started, and runs until Fred, Betty and Barney have all taken as much money as they can. Wilma is then stopped and the total funds reported.
# InsufficientFunds/etc as before

class MoneyDrain(Actor):
    def __init__(self, sharedaccount):
        super(MoneyDrain, self).__init__()
        self.sharedaccount = sharedaccount
        self.grabbed = 0

    @process_method
    def process(self):
        try:
            grabbed = self.sharedaccount.withdraw(random.choice([10,20,40,80,160]))
        except InsufficientFunds as e:
            print "Awww, Tapped out", e.balance, "<", e.requested
            self.stop()
            return
        self.grabbed = self.grabbed + grabbed

class MoneySource(Actor):
    def __init__(self, sharedaccount):
        super(MoneySource, self).__init__()
        self.sharedaccount = sharedaccount

    @process_method
    def process(self):
        self.sharedaccount.deposit(random.randint(1,100))

account = Account(1000).go()

fred = MoneyDrain(account).go()
barney = MoneyDrain(account).go()
betty = MoneyDrain(account).go()

wilma = MoneySource(account).go()   # Wilma carries all of them.

wait_for(fred, barney, betty)

wilma.stop()
wilma.join()

account.stop()
account.join()

print "GAME OVER"
print "Fred grabbed", fred.grabbed
print "Wilma grabbed", barney.grabbed
print "Betty grabbed", betty.grabbed
print "Total grabbed", fred.grabbed + barney.grabbed + betty.grabbed
print "Since they stopped grabbing..."
print "Money left", account.balance
Things worth noting here - we've got 4 completely separate free running threads all acting on shared state (shared funds) in a 5th thread. We're able to start them all off, they operate cleanly, and at this level we can trust the behaviour of the 5 threads - due to the fact that the Account actor ensures that operations on the shared state are serialised into atomic operations. As a result, we can completely trust this code to operate in the manner in which we expect it to.
It's also worth noting that when the withdraw method fails, the exception is thrown inside the appropriate thread. This is visible in the output below, because all 3 threads have to run out of access to funds for the program to exit.
What does the output from this look like?
WITHDRAW    10      990
WITHDRAW    20      970
WITHDRAW    40      930
DEPOSIT     74      930
WITHDRAW    20      984
... snip ...
WITHDRAW    20      47
DEPOSIT     44      47
WITHDRAW    80      11
WITHDRAW    10      1
Awww, Tapped out 1 < 80
DEPOSIT     93      1
Awww, Tapped outAwww, Tapped out 1 < 160
 1 < 40
DEPOSIT     77      94
GAME OVER
Fred grabbed 220
Wilma grabbed 510
Betty grabbed 560
Total grabbed 1290
Since they stopped grabbing...
Money left 171
Multiple threads transferring funds between multiple Accounts
Source: guild/examples/blog/account-3.py
This final example is a bit of fun, but also explicitly shows how to implement a function for transferring funds. Before the main example, let's look at the transfer function:
def transfer(amount, payer, payee):
    funds = payer.withdraw(amount)
    payee.deposit(funds)
This looks deceptively simple. In practice, what happens is someone calls the function with 2 accounts. The appropriate funds are withdrawn from one account and deposited in the other. This is guaranteed to be thread safe due to this translating to the following operations:
- caller: Create a ResultQueue
- caller: create message ((withdraw, payer, amount), ResultQueue) and place on payer's F_inbound queue.
- caller: wait for message on ResultQueue
- payer: receive message from F_inbound queue
- payer: perform contents of method withdraw(self, amount) - put result in "result"
- payer: if an exception is raised put (exception, None) on ResultQueue
- payer: if an exception is not raised put (0, result) on ResultQueue
- caller: if result is (exception, None), rethrow exception
- caller: if result is (0, result), then the result is stored in "funds"
- caller: create message ((deposit, payee, funds), ResultQueue) and place on payee's F_inbound queue.
- caller: wait for message on ResultQueue
- payee: receive message from F_inbound queue
- payee: perform contents of method deposit(self, amount) - put result in "result"
- payee: if an exception is raised put (exception, None) on ResultQueue
- payee: if an exception is not raised put (0, result) on ResultQueue
- caller: if result is (exception, None), rethrow exception
- caller: if result is (0, result), then the result is discarded, and the function exits
This then allows us to create a MischiefMaker. Our MischiefMaker will be given two accounts - their own and a friend's. They will then repeatedly transfer random amounts of funds out of their friend's account. They'll also keep track of how much money they've grabbed from their friend.
An example of tracing the logic here might be this:
- Barney/Fred balances: 1000,1000
- Barney grabs 250, Fred grabs 250 - Barney/Fred balances: 1000,1000
- Barney grabs 250, Fred grabs 250 - Barney/Fred balances: 1000,1000
- Barney grabs 250, Fred grabs 250 - Barney/Fred balances: 1000,1000
- Barney grabs 250, Fred grabs 250 - Barney/Fred balances: 1000,1000
- Barney grabs 250, Fred grabs 250 - Barney/Fred balances: 1000,1000
- Barney grabs 500, Fred grabs 250 - Barney/Fred balances: 1250,750
- Barney grabs 500, Fred grabs 250 - Barney/Fred balances: 1500,500
- Barney grabs 500, Fred grabs 250 - Barney/Fred balances: 1750,250
- Barney grabs 500, Fred grabs 250 - FAILS, Barney gives up. Fred then continues.
The upshot here is that both Fred and Barney are grabbing what they think is a lot more than 1000 each, even though there's only 2000 in circulation. This seems a bit counter-intuitive, but when you consider that the banking system does essentially operate this way - just with more actors - it makes more sense.
So the MischiefMaker code looks like this:
class MischiefMaker(Actor):
    def __init__(self, myaccount, friendsaccount):
        super(MischiefMaker, self).__init__()
        self.myaccount = myaccount
        self.friendsaccount = friendsaccount
        self.grabbed = 0

    @process_method
    def process(self):
        try:
            grab = random.randint(1,10) * 10
            transfer(grab, self.friendsaccount, self.myaccount)
        except InsufficientFunds as e:
            print "Awww, Tapped out", e.balance, "<", e.requested
            self.stop()
            return
        self.grabbed = self.grabbed + grab
As before, this should be fairly clear. We keep track of accounts, and transfers occur bidirectionally as quickly as possible.
account1 = Account(1000).go()
account2 = Account(1000).go()

fred = MischiefMaker(account1, account2).go()
barney = MischiefMaker(account2, account1).go()

wait_for(fred, barney)

account1.stop()
account2.stop()
account1.join()
account2.join()

print "GAME OVER"
print "Fred grabbed", fred.grabbed
print "Barney grabbed", barney.grabbed
print "Total grabbed", fred.grabbed + barney.grabbed
print "Since they stopped grabbing..."
print "Money left", account1.balance, account2.balance
When we run this, all 4 threads are free running. Fred grabs money, Barney grabs money, and the fact withdraw and deposit are actor_functions ensures that the values in each account are valid at all points in time. The upshot of this is that when the simulation ends, we started with a total of 2000 and we finished with a total of 2000. Snipping the now substantial output somewhat:
INITIAL 1000
INITIAL 1000
WITHDRAW    90      910
WITHDRAW    50      950
DEPOSIT     90      910
DEPOSIT     50      950
WITHDRAW    20      980
WITHDRAW    10      990
DEPOSIT     20      980
DEPOSIT     10      990
WITHDRAW    100     900
WITHDRAW    90      910
DEPOSIT     100     910
DEPOSIT     90      900
... snip ...
DEPOSIT     50      850
WITHDRAW    90      810
DEPOSIT     90      1100
WITHDRAW    30      780
... snip ...
WITHDRAW    10      100
DEPOSIT     10      1890
WITHDRAW    30      70
DEPOSIT     30      1900
WITHDRAW    20      50
DEPOSIT     20      1930
Awww, Tapped out 50 < 100
GAME OVER
Fred grabbed 27560
Barney grabbed 28350
Total grabbed 55910
Since they stopped grabbing...
Money left 50 1950
Ending money 2000
The thing I like about this example incidentally is that it shows Fred and Barney having very large logical incomes from each other, whereas in reality there was a fixed amount of cash. (Essentially this means Fred and Barney are borrowing from each other, much like banks do)
Conclusion
Not only can concurrency be dealt with sanely - as per Rob's point, it can also look nice, and be developer friendly. If you extend the actor model to include actor_functions, complex problems like concurrent update can become clear to work with.
In a later post I'll go into the internals of how this is implemented, but the description of how the transfer method operates should make it clearer that essentially each actor serialises actions upon it, ensuring that actor state can only be updated by one thread at a time.
Links to the three examples:
- guild/examples/blog/account-1.py
- guild/examples/blog/account-2.py
- guild/examples/blog/account-3.py
If you find this interesting, perhaps give it a try at some point. I personally find it a more practical approach - especially when dealing with things that are naturally concurrent.
Comments welcome.
This week's Interesting Links
March 08, 2014 at 03:11 PM | categories: interesting, OpenGL, easteregg, links, stuffread

In no particular order, some things I read and thought worth blogging about for some reason. I may start doing this as a regular thing - as opposed to twittering everything. Sometimes these will be links put here as an aide memoire to come back to, rather than as bookmarks.
Easter Egg
Fun little easter egg from google. Don't Blink!
Modern Open GL
Many moons ago I wrote (in SML) a tree growth simulator that allowed you to define an L-System in a custom DSL, which then modelled a tree's growth in a voxel space, based on the contents of the voxels. Finally it spat out drawing instructions as another custom DSL to drive a renderer. The renderer used Open GL (written in C), and was my first real usage of Open GL. Fun it was too. Since then Open GL has changed somewhat, and I've heard good recommendations for these next two links.
First of all a set of tutorials:
Slides from University of Texas - interesting in that it also covers the evolution of OpenGL rapidly:
Kids' coding challenges
Launched by Young Rewired State - this is a collection of coding challenges for kids. Essentially a collection of projects which can give some direction to learning coding. After all, coding is a means to an end, so this provides a collection of "ends" to head towards.
It's worth noting that this is actually a competition, and that there will be awards and everything at Buckingham Palace.
The actual challenges though look suitable for everyone from 8-80 who likes having a bit of fun.
Take a look, have a go. Help a kid you know have a go.
What would Travolta Call you?
Not everyone watches the Oscars ceremony these days, especially given the timezone difference and the fact it's a PITA to find. However, when people goof up, it goes round the internet at lightspeed, and this happened this week with John Travolta calling Idina Menzel by the name Adele Dazeem. Very silly goof. Anyway, Slate were quick off the mark with this silly little web page that simply asks you to enter your name and tells you what John Travolta (or Jan Thozomas as we like to call him) would call you.
Me ? I'm Marcel Speerce.
If you missed the silliness, the above also has a link to the video.
8 Reasons why Programmers make the best ...
Fun little observations from Emma Mulqueeny - best known for founding and running Young Rewired State.
TED Playlist on Creativity
I've included this because it looks interesting, but I've not watched the videos yet. As I say, a playlist on creativity.
General
- http://time.com/12786/the-new-barbie-meet-the-doll-with-an-average-womans-proportions/ - Someone decided to find out average women's proportions, and make a model based on it. (This was after making a 3D model online which was well received.) It's an interesting thing.
- http://timkastelle.org/blog/2014/03/you-should-start-a-blog-right-now/ - Bunch of reasons as to why you should blog and what you should blog about.
- The BBC News website soft launched the responsive design version of their website, and there was a discussion on hackernews regarding it. Quite an interesting discussion. -- https://news.ycombinator.com/item?id=7346176
- Interesting web/JavaScript based diagramming system - http://www.gojs.net/latest/learn/index.html
Programming Related
The following 3 links looked interesting this week for various reasons as they popped past the websites I poll periodically.
- http://en.wikipedia.org/wiki/Reactive_programming - This is a general term for systems that perform computation when things happen. The simplest type that many people are familiar with is spreadsheets - change a number and the formulas update, so the other numbers change. However it also applies to lots of other domains.
- http://cppquiz.org/ - I use C++ an awful lot at work at the moment, and it's one of those languages that gets more and more complex every year, with lots of edge cases making developers' lives harder. For me the only sane way to use it is to define a subset you're going to use and stick with that. In order to understand why someone would reach that conclusion, take a look at this quiz. This probably also explains why python remains my favourite language!
- https://gist.github.com/hrldcpr/2012250 - Clever trick in python using defaultdicts.
Recent Blog Posts
A little known fact about the BBC News website is that the first 4 paragraphs and title of a news story used to be used in the creation of CEEFAX pages. In that spirit, I've started making my posts (except this sort) such that you can get the gist of them from the introduction - which may be up to 4 paragraphs long. (My paragraphs are longer than BBC News's though, and I don't have an editor :-)
Recent posts:
- Changing Communications - http://www.sparkslabs.com/michael/blog/2014/02/26/changing-communications - Short version: switched away from twitter back to blogging when using my real name, with good reasons.
- Guild - Pipelinable Actors - http://www.sparkslabs.com/michael/blog/2014/03/07/guild---pipelinable-actors-with-late-binding - Guild is a python library for creating thread based applications. Guild actors have late bindable methods to allow pipelining and lots of the fun stuff that Kamaelia used to have, but with the pleasantness of Actor style syntactic sugar.
Enjoy!
Guild - pipelinable actors with late binding
March 07, 2014 at 11:51 PM | categories: open source, python, iot, bbc, actors, concurrency, kamaelia

Guild is a python library for creating thread based applications.
Threads are represented using actors - objects with threadsafe methods. Calling a method puts a message on an inbound queue for execution within the thread. Guild actors can also have stub methods representing output, which are expected to be rebound to actor methods on other actors. These stub methods are called late bind methods. This allows pipelines of Guild actors to be created in a similar way to Unix pipelines.
Additionally, Guild actors can be active or reactive. A reactive actor performs no actions until a message is received. An active guild actor can be active in two main ways: it can either repeatedly perform an action, or more complex behaviour can use a generator in a coroutine style. The use of a generator allows Guild actors to be stopped in a simpler fashion than traditional python threads. Finally, all Guild actors provide a default 'output' late-bindable method, to cover the common case of single input, single output.
Finally, Guild actors are just python objects - actors with additional functionality - and Guild is designed to fit in with your code, not the other way round. This post covers some simple examples of usage of Guild, and how it differs (slightly) from traditional actors.
Getting and Installing
Installation is pretty simple:
$ git clone https://github.com/sparkslabs/guild
$ cd guild
$ sudo python setup.py install
If you'd prefer to build, install and use a debian package:
$ git clone https://github.com/sparkslabs/guild
$ cd guild
$ make deb
$ sudo dpkg -i ../python-guild_1.0.0_all.deb
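Either way, a quick sanity check that the install worked might be something along these lines:

$ python -c "from guild.actor import Actor; print Actor"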
Example: viewing a webcam
This example shows the use of two actors - webcam capture, and image display. The thing to note here is that we could easily add other actors into the mix - for network serving, recording, analysis, etc. If we did, the examples below can be reused as is.
First of all the code, then a brief discussion.
import pygame, pygame.camera, time
from guild.actor import *

pygame.camera.init()

class Camera(Actor):
    def gen_process(self):
        camera = pygame.camera.Camera(pygame.camera.list_cameras()[0])
        camera.start()
        while True:
            yield 1
            frame = camera.get_image()
            self.output(frame)
            time.sleep(1.0/50)

class Display(Actor):
    def __init__(self, size):
        super(Display, self).__init__()
        self.size = size

    def process_start(self):
        self.display = pygame.display.set_mode(self.size)

    @actor_method
    def show(self, frame):
        self.display.blit(frame, (0,0))
        pygame.display.flip()

    input = show

camera = Camera().go()
display = Display( (800,600) ).go()
pipeline(camera, display)
time.sleep(30)
stop(camera, display)
wait_for(camera, display)
In this example, Camera is an active actor. That is, it sits there, periodically grabbing frames from the webcam. To do this, it uses a generator as a main loop. This allows the fairly basic behaviour of grabbing frames for output to be clearly expressed. Note also that this actor does use the normal blocking sleep function.
The Display Actor initialises by capturing the passed parameters. Once the actor has started, its process_start method is called, enabling it to create a display. It then sits and waits for messages. These arrive when a caller calls the actor method 'show' or its alias 'input'. When that happens the upshot is that the show method is called, but in a threadsafe way - and it simply displays the image.
The setup/tear down code shows the following:
- Creation of, and starting of, the Camera actor
- Creation and start of the display
- Linking the output of the Camera to the Display
- The main thread then waits for 30 seconds - ie it allows the program to run for 30 seconds.
- The camera and display actors are then stopped
- And the main thread waits for the child threads to exit before exiting itself.
This could be simplified (and will be), but it shows that even though the actors had no specific shut down code, they shut down cleanly this way.
Example: following multiple log files looking for events
This example follows two log files, and greps/outputs lines matching a given pattern. In particular, it maps to this kind of command line:
$ (tail -f x.log & tail -f y.log) | grep pants
This example shows that there are still some areas that would benefit from additional syntactic sugar when it comes to wiring together pipelines. In particular, this example should be writable like this:
Pipeline(
    Parallel( Follow("x.log"), Follow("y.log") ),
    Grep("pants"),
    Printer()
).run()
However, I haven't implemented the necessary chassis yet (they will be).
Once again, first the code, then a discussion.
from guild.actor import *
import re, sys, time

class Follow(Actor):
    def __init__(self, filename):
        super(Follow, self).__init__()
        self.filename = filename
        self.f = None

    def gen_process(self):
        self.f = f = file(self.filename)
        f.seek(0, 2)   # seek to end
        while True:
            yield 1
            line = f.readline()
            if not line:   # no data, so wait
                time.sleep(0.1)
            else:
                self.output(line)

    def onStop(self):
        if self.f:
            self.f.close()

class Grep(Actor):
    def __init__(self, pattern):
        super(Grep, self).__init__()
        self.regex = re.compile(pattern)

    @actor_method
    def input(self, line):
        if self.regex.search(line):
            self.output(line)

class Printer(Actor):
    @actor_method
    def input(self, line):
        sys.stdout.write(line)
        sys.stdout.flush()

follow1 = Follow("x.log").go()
follow2 = Follow("y.log").go()
grep = Grep("pants").go()
printer = Printer().go()

pipeline(follow1, grep, printer)
pipeline(follow2, grep)

wait_KeyboardInterrupt()

stop(follow1, follow2, grep, printer)
wait_for(follow1, follow2, grep, printer)
As you can see, like the bash example, we have two actors that tail/follow two different log files. These both feed into the same 'grep' actor that matches the given pattern, and these are finally passed to a Printer actor for display. Each actor shows slightly different aspects of Guild's model.
- Follow is an active actor. It captures the filename to follow in the initialiser, and creates a placeholder for the associated file handle. The main loop then follows the file, calling its output method when it has a line. Finally, it will continue doing this until its .stop() method is called. When it is, the generator is killed (via a StopIteration exception being passed in), and the actor's onStop method is called, allowing the actor to close the file.
- Grep is a simple reactive actor with some setup. In particular, it takes the pattern provided and compiles a regex matcher using it. Then any actor call to its input method results in any matching lines being passed through via its output method.
- Printer is a simple reactive actor. Any actor call to its input method results in the data passed in being sent to stdout.
Work in progress
It is worth noting that Guild is not a mature library yet, but it is sufficiently useful for lots of tasks. In particular, one area Guild will improve on is specifying coordination more compactly. For example, the Camera example could become:
Pipeline( Camera(), Display( (800,600) ) ).run()
That's a work in progress however, along with other chassis and other useful parts of kamaelia.
What are actors?
Actors are threads with a mailbox allowing them to receive and act upon messages. The above webcam example has 2 threads, one for capturing images, and one for display. Images from the webcam end up in the mailbox for the display, which displays the images it receives. Often actor libraries wrap up the action of sending a message to the mailbox of an actor via a method on the thread object.
The examples above demonstrate this above via the decorated methods:
- Display.show, Grep.input, Printer.input
All of these methods - when called by a client of the actor - take all the arguments passed in, along with their function, and place them on the actor's mailbox (a thread safe queue). The actor then has a main loop that checks this mailbox and executes the method within the thread.
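As a rough sketch of that pattern - illustrative only, and much simpler than Guild's real Actor class - the core idea looks like this:

import Queue
import threading

class MiniActor(threading.Thread):
    daemon = True

    def __init__(self):
        super(MiniActor, self).__init__()
        self.mailbox = Queue.Queue()   # thread safe queue of pending calls

    def run(self):
        while True:
            fn, args, argd = self.mailbox.get()   # wait for a message
            fn(self, *args, **argd)               # execute it inside this thread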
How does Guild differ from the actor model?
In a traditional actor model, the code in the camera Actor might look like this:
import pygame, pygame.camera, time
from guild.actor import *

pygame.camera.init()

class Camera(Actor):
    def __init__(self, display):
        super(Camera, self).__init__()
        self.display = display

    def gen_process(self):
        camera = pygame.camera.Camera(pygame.camera.list_cameras()[0])
        camera.start()
        while True:
            yield 1
            frame = camera.get_image()
            self.display.show(frame)
            time.sleep(1.0/50)
- NB: This is perfectly valid in Guild. If you don't want to use the idea of late bound methods or pipelining, then it can be used like any other actor library.
If you did this, the display code would not need any changes. The start-up code that links things together though would now need to look like this:
display = Display( (800,600) ).go()
camera = Camera(display).go()
# No pipeline line anymore
time.sleep(30)
stop(camera, display)
wait_for(camera, display)
On the surface of things, this looks like a simplification, and on one level it is - we've removed one line from the program start-up code. Our camera object however now has its destination embedded at object initialisation and it's also become more complex, with zero increase in flexibility. In fact I'd argue you've lost flexibility, but I'll leave why for later.
For example, suppose we want to record the images to disk, we can do this by adding a third actor that can sit in the middle of others:
import time, os

class FrameStore(Actor):
    def __init__(self, directory='Images', base='snap'):
        super(FrameStore, self).__init__()
        self.directory = directory
        self.base = base
        self.count = 0

    def process_start(self):
        try:
            os.makedirs(self.directory)
        except OSError as e:
            if e.errno != 17:
                raise

    @actor_method
    def input(self, frame):
        self.count += 1
        now = time.strftime("%Y%m%d-%H%M%S", time.localtime())
        filename = "%s/%s-%s-%05d.jpg" % (self.directory, self.base, now, self.count)
        pygame.image.save(frame, filename)
        self.output(frame)
This could then be used in a Guild pipeline system this way:
camera = Camera().go()
framestore = FrameStore().go()
display = Display( (800,600) ).go()
pipeline(camera, framestore, display)
time.sleep(30)
stop(camera, framestore, display)
wait_for(camera, framestore, display)
It's for this reason that Guild supports late bindable actor methods.
What's happening here is that the definition of Actor includes this:
class Actor(object):
    # ...
    @late_bind_safe
    def output(self, *argv, **argd):
        pass
That means every actor has available "output" as a late bound actor method.
This call to pipeline:
pipeline(camera, display)
Essentially does this:
camera.bind("output", display, "input")
This transforms to a threadsafe version of this:
camera.output = display.input
As a result, it replaces the call camera.output with a call to display.input for us - meaning that it is as efficient to do camera.output as it is to do self.display.show in the example above - but significantly more flexible.
There are lots of fringe benefits of this - which are best discussed in later posts, but this does indicate best how Guild differs from the usual actor model.
Why write and release this?
About a year ago, I was working on a project with an aim of investigating various ideas relating to the Internet of Things. (In particular, which definition of that really mattered to us, why, and what options it provided)
As part of that project, I wrote a small - but just big enough - library suitable for testing some ideas I'd had regarding integrating some ideas from Kamaelia with the syntactic sugar of the actor model. Essentially, to map Kamaelia's inboxes and messages to traditional actor methods, and to map outboxes to late bound actor methods. Use of standard names and/or aliases would allow pipelining.
Guild was the result, and it's proven itself useful in a couple of projects, hence its packaging as a standalone library. Like all such things, it's a work in progress, but it also has a cleaner-to-use version of Kamaelia's STM code, and includes some of the more useful components like pipelines and backplanes.
If you find it useful or spot a typo, please let me know.
Changing Communications
February 26, 2014 at 07:45 PM | categories: personal, meta, blogging

Given the break, you might think that I'd given up blogging - I'd actually shifted mode of communication. I'm sort of switching back, and this post is about why. For those of a 'too lazy; don't read' disposition, the short version is that I had switched largely to Twitter, but due to the harassment of family and friends by a stalker, I've recently abandoned it. Anything using my real name will be posted here, and that will largely be tech or professional type stuff. Anything else - be it personal, friends, volunteering, etc - will be under a pseudonym, entirely unconnected to my real name.
Background
As noted, I'd switched over to systems like Twitter. This seems like a great idea after all - everyone else in tech uses it, it's where all the cool discussions happen. What could possibly go wrong?
Without beating around the bush, I and my family have what is best described as a stalker. Like most stalkers, this person is well known to us, and like many stalkers, they use any avenue they see as a route to abuse. That's why this site uses a static blog system. It's why comments here are pre-moderated. It's why switching to Twitter was a bad idea in retrospect.
I'm not about to go into details here as to who, what, why, but will touch briefly on how, where and when.
Living with Stalking
The 'when' is essentially on-going over a period of about eleven years. Sometimes there is a break - due to courts being kind and giving us some peace and quiet - but these tend to end - suddenly, without warning, and in entirely unwelcome ways. Then the abusive contact tends to be extremely intense for a period of weeks, months or longer. (Yes, we've involved courts at times, but there's practical limits - especially if the person involved is willing to libel judges)
The 'breaks' lull you into a false sense of security. Maybe this time they'll see the insanity of their actions and just leave us alone. However, you can repeat 'leave us alone' over and over again, dozens of times and they simply won't get the message. The point here is as a result, you can be completely lulled into a false sense of security, and then you get a contact from your stalker, and BAM you're instantly back into that frame of mind, as if there was no break at all... (Last year, the longest break was a couple of months)
Yes, this is very wearing, but it is what it is.
The 'where' is literally anywhere they think they can get hold of you. When we lived down south, this person knew where we lived. After one set of court protection was up, in order to gain a few weeks' extra grace, we went on holiday abroad. When we got back, we put the house on the market, since this person had previously turned up on our doorstep. We sold our house, and moved into rented accommodation. When the opportunity came up to stay working with R&D, but move 200 miles away, we took it (a win-win scenario). Overall, in order to protect our privacy, we've moved house 3 times.
Online - GoogleStalk
When it comes to online however, every year it's harder and harder to stay unfindable. If you're in tech and want to stay connected, having some kind of public presence is almost obligatory. In professional circles, can you avoid linkedin? Innocent thanks for helping with community projects and youth groups leave an online trail. In short, the only real way to stay unfindable is to only use a pseudonym for everything online. Any system that requires your real name (which you must be searchable under even if you're 'also' allowed a pseudonym), well that's off limits. (Google, I'm looking at you) You also can't talk about things that could link back to you.
If you are findable, then this provides your stalker with information. Information our stalker twists round in their head to be used against me, and others who care about me. I could go into details, but I'd rather not.
The point is though that when you run your own server, and you see accesses to your website daily, at all sorts of times of the day and night (1, 2, 3, 4, even 5am) for months, and you have positively verified who those accesses are from, then it's a healthy paranoia. (Much like walking on pavements rather than the middle of the road is a healthy paranoia)
Online Stalking Consequences
To be forced to only use a pseudonym online to avoid this is the equivalent of being forced to don a complete head to toe disguise whenever you leave the door. It's the online equivalent of being under permanent house arrest, because anything you do or say can and will be used against you as abuse. It's like being forced into the actions of an agoraphobic without being one. You can't join in conversations with friends, because it will be used against you. You can't express delight at the actions of your family and shout it to the world, because it'll be used against you.
You can't really even talk about your work or things you do there, except very carefully, in a very circumspect way. This is a problem of course. Like many places, at work one of the expectations we have is that we'll do things that are public, and talk about our work publicly and openly, and I worked for many years to make public release of our code simple and normal. A huge part of working with open source is, again, public participation. If it's to do with work, this has to be under a real name, largely because in practice, otherwise people simply don't trust you.
So why break radio silence on this blog now?
Well, as noted, I switched communications styles nigh on 2 years ago. I guess I switched to Twitter because it's more informal, friendlier, more personal, relatively immediate. While it can cause misunderstandings, they can be resolved quicker too. Also, back then, Twitter appeared to be heading down the right path (though they've veered in the wrong direction). I'll cover the details on that another time, since Twitter was the tool of abuse, not the abuser.
I mention Twitter though because in the lead up to Christmas our stalker broke one of these silences for whatever reason, once again making our lives hell (which continued throughout the Christmas and new year period incidentally, off twitter). This time though I'd gotten used to talking to friends on Twitter, and complained about the stalker's actions to my friends there, preserving the stalker's privacy. I received kind supportive messages from people I know as friends, professionally, from work, and from the open source community.
At this point however, this individual decided to start sending abusive messages to these kind supportive people. By responding to my comments about the stalker's actions the stalker felt they had the right to attack them. Now it's one thing to attack me, but another to attack my friends, colleagues, etc.
My immediate action (after reporting the abuse, again) was to switch my account to protected - meaning only those who were following me could see my tweets. The stalker's account was already blocked, but I had reason to believe they were using another mechanism to track tweets. (The account that they were tweeting from didn't follow anyone, it was solely used for abusive purposes)
I then came to the conclusion that the way I had been using Twitter was no longer viable, and that leaving tweets there would just leave me open to more abuse, so I deleted all my tweets from the previous 2-3 years. Every single one. I did let my Twitter followers know what was going on (in more general terms than this post!), but to be honest I doubt more than a handful really noticed, though some definitely did, and were very supportive.
I then had to decide 'what next', which of course means 'what now?' (Given Twitter is now untenable)
While I took that action nearly 3 months ago, and it does seem to have had its intended effect (removing that twitter id as a target), we received an ongoing barrage throughout that time. As a result, I've been deliberately slow to decide what comes next. (Also, better things to do than deal with this crap)
What now?
Short version: real name for professional (or similar) stuff, or where using my real name matters, and only used where I have reason to believe I'm able to sufficiently trust the system.
Pseudonym for everything else. Neither account to ever reference the other.
Things with my real name on will only go on systems where I can premoderate replies. As a result, my Twitter feed will switch over to being largely unused, except for posting links either to this blog or to things I read that are interesting, though I may well instead save those for a weekly interesting links blog post.
On the flip-side, not having my pseudonym linked to me, my name, or my employer should be interesting. While it doesn't mean complete freedom of speech, it will allow me to say more than I've said in a long while.
So, if there's no online link between the two (or more :-) identities, how will my friends know it's me talking to them? Simple: I'll tell them, happily and freely. I know I can trust them not to link the pseudonyms to me. Get in touch if I don't :-)
It sucks to have to do this, but the alternative is to cease using the internet as a social medium, and that would suck even harder.
Nesta Report - Legacy of the BBC Micro
May 23, 2012 at 11:21 PM | categories: BBC, programming, reports, nesta, kidscoding

Earlier in the year we visited MOSI - which is (these days) a Manchester branch of the Science Museum. Anyhow, one of the long standing exhibits is of the Manchester Baby - or more specifically a version of the Baby which has been lovingly restored with many original parts. The Manchester Baby is significant because it was the world's first stored program computer, and was switched on and ran its first programs in June 1948.
The exhibition it was in was a celebration of computing, including the BBC Micro. They had a survey, which would feed into a study on the legacy of the BBC Micro; you popped your answers in and they'd forward you the results if you were interested. I ticked the box, and duly today received the following email:
Thank you for responding to the BBC Micro survey that the Science Museum conducted in March 2012. We received 372 responses which is amazing given that the survey was only open for a couple of weeks. Many people left detailed responses about their experiences of using computers in the 1980s and the influence it had on their subsequent careers paths. As you took part in the survey I thought you might be interested to know that the Nesta report on the Legacy of the BBC Micro has been published today and is available to download here: www.nesta.org.uk/bbcmicro
It's an interesting report, and from what I've skim read it seems to be a good report. If I had to pick out one part, the final chapter - "A New Computer Literacy Project ? Lessons and Recommendations" in particular is something I think is very good food for thought, and motivational.
In particular it makes 7 recommendations which I'll aim to summarise:
1. A vision for computer literacy matters
The report singles out John Radcliffe as a single motivating force in the original Computer Literacy Project, and as a key driver in its success in the past. It goes on to say:
- Today, there is no single vision holder creating the partnership between industry, education and the range of actors who want to change things. {...} personal leadership and vision, based on a level of knowledge and understanding of the industry that inspires respect, and backed up by significant skills in diplomacy, is vital at this point in time.
I'm not sure I agree on the point regarding a lack of vision, but I possibly agree on there being no single vision.
2. We need a systemic approach to computer literacy and leadership.
The key point brought out here is that the reason why the CLP worked last time around was due to there being efforts at many levels - all the way from grass roots up to high level government buy in, which included resources - and that the key reason for success was the many micro-networks that existed, which were assisted in co-ordination (in part) by the BBC's Broadcasting Support Services (which I don't think has a modern equivalent). Crucially though, individuals and local groups had to take ownership, and did.
It makes two interesting points:
- A key part of this plan will be empowering local and regional bodies to take part and own this new movement for computer literacy. Organisations leading this effort need the commitment, the capability, the skills and the scale to deliver. At present there are many candidates, but no one organisation that meets all these criteria. Someone needs to step up to the plate.
In particular, it then goes on to single out the BBC, but with some caveats:
- The BBC may be able to replicate their previous role, by providing supporting television and
online programming, but to deliver at the scale of the original CLP would require not only
significant buy-in at a senior level but significant institutional re-engineering and resources.
This final caveat of buy-in at a senior level, and buy-in with resources, is an important point. After all, it's one thing for someone to say "this is a good idea" and another for someone to say "yes, this matters, here are resources" - which may only be time and space.
3. Delivering change means reaching homes as well as schools.
While I really like the last part of this recommendation because it really says what should be done clearly and succinctly...
- We need a contemporary media campaign: television programmes that
legitimise an interest in coding, co-ordinated with social media that can
connect learners with opportunities and resources. This is a vital but
supporting role, arguably still best played by the BBC.
... it gets there by making a point which I emphatically agree with - that the focus of this must reach out into the home, rather than just into schools. In particular it notes a point I've made repeatedly:
- Despite the BBC Micro now being considered ‘the schools machine’, the computer was actually
more important in its impact of changing the culture of computing in the home, particularly
through the legitimising effect of the television programmes.
The key point brought out is that it's this aspect - which boils down to engaging children in their time - that really matters.
4. There is a need to build networks with education.
Again, the executive summary is nicer/kinder than the final chapter, and in particular calls out a real issue from the BBC side of things:
- At the core of the original CLP were the BBC’s educational liaison officers {...} These two-way networks provided an invaluable way of both listening and delivering appropriate tools and training for adult, continuing and schools education. {...} These days, the BBC has neither the equivalent staff nor the resources to play such a role.
That's pretty harsh. However, it's interesting to note that there are some groups actively looking at improving this sort of thing. I think however there's a long way to go based on what I've seen. There's misunderstanding about how schools work, how academia works, how industry really builds good systems/works, how the BBC operates today, etc.
On a positive note, people are talking, listening and trying to figure out how to move forward - which IMO supports this recommendation.
5. There are lots of potential platforms for creative computing; they need to be open and interoperable.
This point really summarises the issue I've described before of "ask a hundred engineers what the BBC Micro of Today would be and you get a hundred different answers". Furthermore, it points out that the reason why the Micro worked was because it helped provide a clear common platform for talking about things.
Today the issue is that we have a collection of micro-development environments which aren't joined up, either in practical terms or conceptually. If they were more joined up, even just within a conceptual framework for discussing things, we could make progress. Without this, teaching this in schools is hard, and it's even harder for someone in the home faced with "where do I start?"
It's probably worth noting at this point that there is a project which may result in just such a common platform or framework. It's a project I've helped spec out, and Salford University are hiring someone to work on it. Its target is really to do with "the internet of things", but it is spec'd out in terms of pluggable micro-development environments. It's my hope that this could help head towards a common platform spec. We'll see.
6. Kit, clubs and formal learning need to be augmented by support for individual learners; they may be the entrepreneurs of the future.
Again, this reiterates the need for support outside schools, and notes that last time round the informal learning was more influential than the formal. This really reflects something quite incredible - it succeeded primarily because people found it fun and had sufficient support.
Not only this, it specifically points out that these things need to go beyond after-school clubs; whilst those are good, they run the risk of following a route of formal learning rather than the motivations that encourage longevity. The former could be said to be "I am doing this because I've been told I should", whereas the real motivator is "I am doing this because it's fun and I want to". I think this is one place where the executive summary is punchier than the main recommendations:
- There is a need for supporting resources that can develop learning about computers outside
the classroom. These may be delivered through online services or social networks, but must
bring learning resources and interaction, not just software and hardware, into the home.
7. We should actively aim to generate economic benefits.
This recommendation points out a handful of things:
- The CLP built upon pre-existing skills in industry
- That this stimulated a market - that boosted that industry
- That that boost also then led to much larger take up than you'd expect otherwise
- That this then drove the industry further down the line
It notes that whilst the third and fourth points may be the goal, there are real benefits from the second point, that this time around any boost could come from international opportunities as well as national and regional ones, and that including economic benefit is a good idea.
Personally I think that final point is more important than people realise. I still remember to this day hearing about a kid writing a game in his bedroom and getting paid £2000 for it. This was 1982/1983 or so, and that was an absolute fortune from my perspective, and the idea that you could get paid for creating something fun was an astounding thought. So you'd get something fun out of the process. You'd have fun making the thing. And you'd get paid for doing it. A bit like how the idea that there are tea tasters excites people who like tea. You don't get interested in tea because you'll get paid for it, but it's an intriguing thought. Likewise, you don't write code because you'll get a fortune - you probably won't. What you might get to do though is something which is fun that you can live on, because its outputs are useful to people. That's a pretty neat combination really.
Anyway, I hope the above points have whetted your appetite to read the whole thing - it's an interesting read. Go on. You know you want to.
Hack to the Future (H2DF), Preston, 11/2/12
February 19, 2012 at 05:49 PM | categories: kidscoding | View Comments
Last weekend (11th Feb) I was at Hack to the Future (H2DF) in Preston, organised by Alan O'Donohoe and held at his school - Our Lady's High School, Preston. What's that I hear you say? Well, the blurb for the event was this:
-
What is Hack To The Future?
It is an un-conference that aims to provide young digital creators aged 11 - 18 with positive experiences of computing science and other closely related fields, ensuring that the digital creators of today engage with the digital creators of tomorrow.
We plan to offer a day that will inspire, engage and encourage young digital creators in computing/programming ... -- [ from here, snipped ]
Originally, I'd intended to attend just as someone interested in this area, perhaps putting on a session (it was an unconference after all) teaching the basics of something. In the end I helped BBC Learning Innovation in a more official way (ie wearing my BBC id, etc), helping them test out a bunch of ideas, creating some tutorials, acting as a mentor/guide on the day, and yes, teaching a bunch of people young and old about various aspects of coding.
How the Day Went
In short, brilliantly. Very tiring, very long day, but very rewarding and learnt lots.
After an initial quiet period when everyone was setting up, and they were having welcoming sessions, we started getting a few visitors. They were made to feel welcome, and we chatted about what we were doing, and very quickly kids were sitting at laptops, learning to hack (in many cases) their first bits of code.
There were six laptops in use at any point in time, and after that initial start we were very busy for the entire day, only stopping briefly for lunch. Three devs who'd worked on the prototype that BBC Learning were testing, a few helpers from BBC Learning and one from BBC Children's, and myself were all acting as mentors, working through the tutorials I wrote as well as the prototype's built-in tutorial.
Some learnt to bling up a simple hello-world style web app (which you can think of as being like an e-greeting card really). Others extended the quiz with different questions and answers. The vast majority I worked with extended the image editor to add a collection of user interface elements, model elements and actual filters for flipping the image, inverting colours, embossing it, brightening it, or changing the HSL values, as well as changing mosaic tile sizes. (These used the Pixastic library)
Given these were done in 20-30 minute chunks by small groups of 1-4 (depending on how many came round), I think this was a great result. Crucially, I think kids took away the fact that you could do this, and that all you need is time, a text editor and a browser (at least for javascript).
As well as the reward of completing some task, each child that attempted it received a BBC Owl Badge, a BBC Owl cloth patch, and some other bits and bobs.
Primarily because I had the documentation for the Pixastic library open in a browser window on a second desktop, I tended not to use the platform BBC Learning created, choosing to use my laptop instead. The other 5 groups though used the platform all day long, with as much success as I had with the kids.
For the large part, the children were doing the driving - they did the typing, they decided what they would code, how they would code it and why. I purposely stood behind the screen with them in front of the computer, making it clear it was their opportunity. I'd feed some information about code structures - by pointing at code printouts, and then guide by asking questions about "Where do you think X is? How might you add Y? Can you see anything similar to what you're after ?", but otherwise left all the decisions to them.
At the end of the day, some key takeaways for me were:
- Whilst tools are needed, they only need to be sufficient to get started.
- What is far more important is content - tutorials. The more of these that can be written, the better.
- To emphasise this, there were a number of teachers asking for the tutorial materials and asking when/if they could be made available. I'm hoping sooner rather than later. ( Since I wrote them on my own time, I could just publish them, but they'd be less likely to be found if I do that.)
- This need makes perfect sense when you consider that many teachers - especially primary school - are not specialists in coding. A handful may have these skills, but nowhere near all, so there's clearly somewhere support could be given.
- A huge level of interest from children - of all ages - even the very youngest who attended both wanted a go, and also "got it" when walked through.
- An unexpected level of interest from digitally literate adults who want to do the same, BUT with the huge caveat of not having the time/task orientation
- To me, it confirmed that the issue isn't just coding, but in being able to create controlled automation of intellectual tasks. (Akin to Nintendo's focus on "play" and "games" rather than on "gamers")
It made me feel that there is a need for compelling, useful, interesting and sufficiently featureful and sufficiently understandable content, code, examples and metaphors. (My tutorials/examples filled that gap for the day, but there's a genuine wider need) That content can't be written by non-experts but needs to be written or translated perhaps for complete novices.
I'm painfully aware of many many good things being done in this area, but is it reaching those in the teaching profession? My impression, based on reactions I saw, is "not really". There's going to be a variety of needs there I think, both in terms of pulling together existing resources and in terms of creating new resources, but also in terms of getting the resources in the places they need to be.
On this note of "relevant support material", it's perhaps worth noting that I also took along my inspiration for my tutorials - in the form of my collection of kids coding books from the 80's...
This turned out to be quite useful. After all, these days programming languages are generally focussed on giving instructions to computers in ways that make the programmer's life easier. This really means communicating in a way that other developers understand. Being able to show kids books which were kids books back in the day, which actually assumed that teaching machine code was a sensible and plausible thing to do (and it was) really helped in showing them "yes, you can".
For me, this aspect of opening eyes to "yes, you can", and then seeing the light go on in children's eyes when they realised "Yes, I really can! I made it do THAT!" was fantastic.
Adults!
Also, there were a small handful of adults who came in and asked "How do you do that?". I found this both interesting and useful in that I could clearly explain to them, but also learn why they wanted to know. What I've learnt from that encounter could easily take up a whole blog post in itself. However, some key takeaways:
- Whilst kids can spend hours focussed on playing with something - given a good enough motivation, generally speaking adults simply do not have the time to do so.
- Adults need task oriented tutorials, that actually match the tools they have available in practical terms
- Recording a macro as a script for later editing/driving the app may be enough.
- But too many apps don't allow this, or it's inconsistent between apps or difficult to use or even find.
- This task orientation is also more like what happens in a class room where you also don't have time to spare.
- Having the Usborne books reminded some: "yeah, I remember that, you went 10, 20, 30 and had to leave space for 11, 12, 13". This means they learnt the ABC's but never had a route to actually use those ABC's.
This makes me think two key points:
- There's potentially also a big need/market for task oriented computational tools among adults. (This helps explain the enduring popularity of spreadsheets despite their problems)
- When talking about computational literacy, it's not just children.
BBC And Hack to the Future
(Caveat: In this bit I'm talking about another part of the BBC, I've probably got something wrong here :-)
If you scan through the list of attendees at Hack to the Future, you'll see a number of attendees there from the BBC, and specifically "Innovations, BBC Learning". This is the bit of the BBC that tries to figure out the things that BBC Learning could commission. ie They try out things BBC Learning could do, see what works, what doesn't and based on all sorts of criteria suggest specific things that BBC Learning could do. They're also the bit of the BBC that asked Keri Facer to explore interest & possibilities.
So, ask 100 engineers, you get 100 different answers. Some people thought that a Micro 2.0 would be hardware - either Arduino/mBed/BeagleBone like, or RaspberryPi/Tablet/Netbook/Net Top like - through to a pure webapp, downloadable apps, etc. Unless you try something you won't learn.
So BBC Learning Innovations decided to try something - specifically "can we build a kid friendly IDE that enables them to get started using Javascript & HTML5". Now, Javascript is far from my favourite language, but I can see the logic behind picking it - it's both unavoidable and ubiquitous, in a similar way that BASIC was 30 years ago. (Well, several things actually, but I'm going to focus on what was taken to H2DF for this post.) Heck, faced with a similar problem 14 years ago, I reached similar conclusions.
So, Parmy has been leading a small dev team to test out the idea, and they built a very simple IDE by taking Eclipse and either skinning it, removing chunks of functionality, or rebuilding from constituent components (not sure which approach :-). The upshot is one idea of what a BBC Micro 2.0 could look like. However, it IS a testable, concrete, real thing that you can put in front of kids and see if it helps.
Someone reading this is likely to react to any/all of the decisions in various ways... but as I've said privately: it's a typical geeky thing to go "how can I improve this hammer that I have?", which I think is a good reaction, but I really don't want it to overshadow the "We've got a Hammer! We've got a Hammer! And we can do This! and This! and This!" thing that is going on. (Once you have a hammer, you can see what you can do with it and whether you want that, a nail gun or a screwdriver :-)
What did this platform actually look like though? Well, I happen to have a screen grab from a screen cast created by Parmy, which I feel no qualms copying here since it was demonstrated to dozens of people at Hack to the Future:
Hack to the Future's organisation was happening concurrently with this, and it looked like a pre-pre-pre-pre-pre-alpha version of the codebase would be ready for the event, so the BBC Learning Innovations team decided to take the platform to H2DF to try it out. My understanding of the plan behind this was:
- Inspire, engage and encourage young digital creators in computing/programming
- Take the downloadable platform there and see whether kids could use it
- What's the level of interest from kids, teachers, and professionals ?
- Was that sort of platform a good idea ?
- Should it be a downloadable app?
- What's missing?
- Is this even a good idea ?
- etc.
ie lots of very good reasons.
I think the day also went a very long way to achieving those goals, or answering the questions listed above.
Like any time you take a new piece of code on the road, there were teething issues:
- Power was at the side of the room, so rearrangement of the room was a clear necessity
- The tutorials assumed a working network connection, and the school's wifi was too locked down
- The platform's codebase was very much pre-pre-pre alpha, ready only days before the event. Whilst for basic "hello world" tutorials it worked one way (which I expected), it worked a very different (more eclipse like) way for non-hello world tutorials. This threw me somewhat - I use Kate and commandline tools normally. (Parmy and 2 other devs who'd been working on the codebase were there as mentors as well, so this was fine for them :-)
- When I did use the platform I found it worked, but a little rough around the edges. For example it currently masks errors. I'd flagged this up before, but I guess there hadn't been time to resolve this before the day.
(Given this, you could ask, "why take such pre-ready code on the road?", the answer of course being it was a priceless opportunity to answer the questions above)
I pondered whether to list these, and then figured a warts and all thing was a good idea, primarily because mentioning warts makes it easier for everyone to avoid them in future.
What I did in prep for H2DF
As background, I've recently been trying to figure out how BBC R&D (the dept I work for) can help BBC Learning with their goals (BBC R&D's job, in short, is to support the R&D needs of the full spectrum of what the BBC does). So on a limited time basis I've been spending time with them, and helping out however I can. After all, IMO, the first step to anticipating someone's needs is to understand them and their goals, and mucking in is one way to learn that :-)
In this case, it struck me that the best way I could help out with testing the ideas was to:
- Help out on the day as a mentor
- Write tutorials & cribsheets
- Provide a mental model about how to teach building basic browser/client side based web apps.
So I wrote 2 tutorials:
- A javascript based quiz, using a little bit of jquery to download the quiz and update the screen
- The other was a simple image editor/manipulator using the pixastic library to pull in filters.
My aims for these tutorials were:
- To be sufficiently interesting/useful to be engaging.
- To be sufficiently simple to be comprehensible by kids who'd never coded in a very short time period. (Each session was to last 20 minutes)
- To be also sufficiently clear to be able to be modified by kids
- To make it clear that it was possible to do so, and that the modifications reflected "real" coding
- To be no more than around 100 lines long, including HTML, style sheets, and javascript.
- For each to teach a basic structural principle by example. Eg
- The quiz demonstrated a data driven game engine, with the data obtained by an ajax call, and a staged user interface
- The image editor really demonstrated how to add user interface elements dynamically, how (and why) maintaining a model of the user interface is a good idea, and how to interact with a library of functions.
- Both were designed to show common structural elements common to client side based web apps.
This is a trickier balance than it might seem, and both took a procedural view of coding rather than an object oriented approach. (Once you've demonstrated the value of having a model, I think you have better justification, from the learner's perspective, for increasing the code complexity in favour of flexibility.)
The reason I took this approach is, in addition to the old Usborne books, probably attributable to Andy Hunt of the Pragmatic Programmers. There's a great book he wrote called "Pragmatic Thinking and Learning" which describes lots of learning models. One of them is the Dreyfus Model of Skill Acquisition, which has 5 levels:
- Novice - Follows Recipes
- Advanced beginner - Adapts Recipes
- Competent - Understand when recipes don't work, can plan their own
- Proficient - Want the big picture, self correct and self improve, understand context (why you don't do that ?!)
- Expert - As well as all above, work from intuition. eg "I think X has this illness, I'll run these tests". This is the inverse of earlier levels of skill.
The key point really is that many computing books are focussed on the competent and proficient, and very few on the true novice. By contrast, if you look at the old Usborne books, they were packed with what are effectively recipes. In part this is also why I picked the Pixastic library - its documentation is entirely geared towards the novice or advanced beginner.
If you're writing documentation or tutorials, I really think you should be thinking "which level am I targeting?" and "am I addressing the boundary - how do I enable someone to leap from one level to the next?". Clearly on Saturday though I was targeting novices, and enabling them to jump over the first hurdle of "I can't" to "I can".
So, along with writing the code, I produced A3 PDF's describing, in the style of those old Usborne books, what the code did and why. These were deliberately that size to encourage readability by groups, and used friendly fonts :-) Sadly, I didn't have time for little hand-drawn robots or manga cats. (I think using manga cats would be more gender neutral :-)
I'll probably blog about the analogy I used on the day for web apps at a later point in time.
Where Next ?
What BBC Learning do next is very much up to them, and I'll support them in their plans.
This whole event though did get me thinking about Micro Development Environments - MDE's - as opposed to Integrated Development Environments (IDEs).
- The platform we had was a micro development environment for javascript based apps
- The Arduino development environment is a micro development environment for a particular type of embedded system
- Microsoft's Gadgeteer platform is another, for a similar type of system. Microsoft's Kodu, again, is another with a different focus.
- Raspberry Pi has the possibility of being either a micro dev environment, or a host environment.
- Yousrc is a flash based / basic like language for creating web embedded games and similar, again a micro dev environment.
- Scratch, is another.
- Play My Code is, again, another Micro Development Environment, targeted at making it fun to write games for the browser. Personally I think Play My Code falls into the category of "most likely to hook kids and those who are kids at heart". Really neat :-)
- And so on.
In essence there are a growing number of Micro Development Environments, and each focusses (rightly) on a particular domain and has its own strengths. Personally, I think the "where next" lies in tutorials focussed on the different micro dev environments, and perhaps in looking at "what micro dev environment is missing?".
Incidentally, since H2DF I've spent a couple of evenings in the past week knocking up what could be a web version of the BBC platform, since it looked like a reasonably plausible thing to do, and I think it looks pretty nice as a possible browser based dev platform:
That allows you to load 2 different apps, and the source for the MDE, edit them, and run them. (No saving, but you could copy and paste to a local editor.) The fun thing is that as well as running on my laptop, it runs on a Kindle and even in my (non-iOS, non-android) phone's browser. There are downsides to this approach, but does being no-download outweigh them?
Early days to say the least for that anyway, and more for a blog post another day.
Thanking Alan, AKA The Perils of Hoaxing
My closing note is really to thank the irrepressible Alan O'Donohoe for organising this Stone Soup. If he hadn't hoaxed everyone at Barcamp Media City and again at Pycon UK, then it's very likely that certain reports around H2DF wouldn't've been taken as a hoax as well. I think the lesson there is do it once, and make sure everyone understands it was a hoax and why, and people will be happy. Do it twice, and when something happens that sounds the same then it might be assumed to be a hoax. I think that's a real shame because what he organised really was amazing, and hopefully people will forgive him the hoaxes now :-)
Also, whilst I thought the hoaxes were a bad idea, the more I heard about the detail he went to - setting up stooges in his audience for example - the more I can't help but admire the audacity. I don't agree with it, but I can't doubt Alan's determination, impishness, and element of wanting to bring fun to the table.
So many thanks to Alan for organising H2DF, it was lots of fun, we all learnt lots both adults and especially the kids :-)
Yes, he cried "The BBC at my school!" twice, but the third time, yes, the BBC really did go to his school and like many others there, I wish we'd had an IT/Computing teacher like him when I was at school ;-)
Continuity
I'm painfully aware that I haven't really followed up on my last blog post about "what I'm actually going to do", but rather been "doing it". This is probably a good way round, and my next post will pick up from the last. ex-Scout's honour. :-)
This is also because a fair chunk of what I was planning has been done by the Play My Code team. Go take a good look and play with their site.
Observations on weightloss
December 12, 2011 at 07:50 PM | categories: health, first world problems, weightloss | View Comments
As noted in my earlier post, I've been losing weight this year.
Results
In particular, to give some numbers here, this is how my weight has changed over the course of this year:
- 16/01/11 - 117.1 kg
- 13/02/11 - 113 kg
- 13/03/11 - 110.9 kg
- 17/04/11 - 109 kg
- 15/05/11 - 107.4 kg
- 12/06/11 - 105.7 kg
- 17/07/11 - 101.2 kg
- 14/08/11 - 98.9 kg
- 18/09/11 - 99.3 kg
- 16/10/11 - 98.2 kg
- 13/11/11 - 95.6 kg
- 11/12/11 - 93.9 kg
So, clearly, it's working :-)
Method
In summary I've done this by:
- Keeping a detailed food diary - using rednotebook. I've used it to count calories. The purpose behind this was self-education about how many calories different foods really contain.
- Calculating my RDA once per week using this formula
Calories = ( 10 * weight in kg + 6.25 * height in cm - 5 * age + 5 ) * activity factor
(The formula for women is: (10 * weight in kg + 6.25 * height in cm - 5 * age - 161) * activity factor)
Note this takes into account activity levels. Fat people (like me) tend to have a very low activity factor. ("I have a low metabolism" is usually asserted without it being tested, and is often wrong.)
- Aiming to eat about 75% of RDA per day, at most, but not worrying too much if I go over that once or twice a week, and taking care to go over 85% as rarely as possible and not to go under 60% if I can help it. (A minimal sketch of this daily check follows this list.)
- Weighing foods whose weights I have no idea of, so I can work out their calories; using smaller plates, and bowls that hold known, more sensible, portion sizes.
- Snacking primarily on protein rich/calcium rich/low fat foods to control hunger/snacking urges. (babybel light primarily)
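To make the daily bookkeeping concrete, here's a minimal Python sketch of the kind of check involved. The RDA figure, diary entries and calorie values are made up purely for illustration - this isn't my actual rednotebook/spreadsheet setup.
# Minimal illustrative sketch: check a day's food diary against the 75%/60% bands.
# The rda figure would come from the weekly calculation using the formula above;
# all the numbers here are invented.

rda = 2400.0                      # illustrative RDA in kcal/day
target = 0.75 * rda               # aim to eat no more than ~75% of RDA
floor = 0.60 * rda                # try not to finish the day below 60%

diary = [("cereal + milk", 280),  # hypothetical diary entries: (food, kcal)
         ("soup + bread", 300),
         ("babybel light x2", 84),
         ("evening meal", 900)]

eaten = sum(kcal for _, kcal in diary)
print("Eaten %d kcal = %.0f%% of RDA (target %.0f, floor %.0f)"
      % (eaten, 100.0 * eaten / rda, target, floor))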
All of this is explained in detail in the earlier post.
Crucially, two things I have not done:
- I have not particularly changed my exercise levels. I walk to/from the station, and cycle occasionally in good weather in summer. I do have much more energy now though. Or rather I probably have the same amount of energy, but more is available for things other than shifting my weight around.
- I have also not denied myself foods. Yes, I've had cake, pizza, chips etc. I've just learnt what a sensible portion size looks like.
Observations
Based on this, I've got sufficient data - 47 weeks' data - to create a heatmap / scattergraph based on:
- Amount of calories "over" eaten per week relative to RDA - -25% means 75% of calories eaten. I've picked "over eaten", because it puts RDA down the middle of the graph. These are not daily averages, these are relative to the weekly calorie allowance.
- Amount of weight gained
This enables me to examine a reasonable hypothesis - does eating more than RDA correlate with weight gain, and does under eating correlate with weight loss?
Well, the graph below is based on those 47 weeks' data.
To me, what that says is pretty clear - yes, over eating relative to RDA does correlate with weight gain, and undereating relative to RDA does tend to correlate with weight loss.
Looking closer, if you over eat relative to RDA, there are 6 data points, all showing weight gain. If you look above the 85% point, there are 5 datapoints which relate to weight loss and 7 that relate to weight gain, with the band between 85% and 100% having 1 gain and 5 losses. Below 85% there is 1 datapoint with weight gain and 34 with weight loss.
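If you wanted to reproduce this kind of plot from your own records, a minimal Python sketch might look like the following. The CSV file and its column names are hypothetical rather than my actual spreadsheet, and it assumes matplotlib is installed.
# Hypothetical sketch: weekly % over/under RDA against weekly weight change.
# Assumes a file "weeks.csv" with columns: calories_eaten, rda_calories, weight_change_kg
import csv
import matplotlib.pyplot as plt

over_pct, weight_change = [], []
with open("weeks.csv") as f:
    for row in csv.DictReader(f):
        eaten = float(row["calories_eaten"])
        rda = float(row["rda_calories"])
        over_pct.append(100.0 * (eaten - rda) / rda)   # -25 means 75% of RDA eaten
        weight_change.append(float(row["weight_change_kg"]))

plt.scatter(over_pct, weight_change)
plt.axvline(0, linestyle="--")   # the RDA line down the middle of the graph
plt.xlabel("% over (+) / under (-) weekly RDA")
plt.ylabel("Weight change (kg)")
plt.show()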
Some tentative conclusions
Whilst correlation does not imply causation, let's presume it's a reasonable conclusion.
If that's the case, this means we can derive some guiding principles from this, ordered by how useful I think they are:
- Weeks where you eat consistently less than 85% of RDA will result in weightloss.
- If you're feasting (eg christmas, easter), then feast, then stop. The worst thing you can do is (for example) feast solidly for 2-3 weeks in a row. The alternative is to replace the usual foods you would eat with the christmassy foods if you just want to eat christmassy foods over an extended period.
- It's the long term (week/fortnight) RDA calorie count that matters.
- Short term (2-3 day) calorie restriction (eg "detox diets") or calorie binges (eg family birthday party weekend) may have a short term effect, but no long term positive/negative effect.
- There's no apparent benefit of going below 70% of RDA.
- Weeks exceeding total RDA for the week will result in weight gain.
- Working on a weekly calorie budget is easier/less restrictive than a daily denial based allowance.
- Weeks with weight gain will gain about a pound or two (0.5-1kg)
- Weeks with weight loss will lose about a pound or two (0.5-1kg, average 0.7kg)
- If you've piled on weight, day in day out, for 2 years, it will take you 2 years to lose it.
I'm not claiming that these conclusions apply to others, but based on the numbers, I do think that they apply to me.
Slightly Silly, But Nice Observation
There's one final point: according to RDA I "should" have eaten 792,235 calories. I've actually eaten 638,000 calories (about 80% overall), and lost 23.2kg. That's 154,235 calories under, or 6,648 calories per kilo. That's about the same amount of calories as in a kilo of sunflower margarine, which makes a rather nice parallel.
So, under eat by a tub of margarine, and lose a tub of margarine-worth of weight. Very literal, and very true.
What do you call this diet then?
I've been asked by a few people what diet I'm doing. I think I call it the "putting less in one end than comes out the other diet". Bit gross but pretty accurate :-)
That or just "I'm doing the eating less / not stuffing my face diet".
Does also mean I can just enjoy food, as long as I make it count. But that's pretty fun, 'cos food is fun :-)
YOU are Needed for Singing In The Rain 18/11, 1pm, MCUK
November 11, 2011 at 04:29 PM | categories: fun, work, silliness, children in need, charity | View Comments
Last month saw the first of a monthly series of BBC North internal brainstorms, and we were asked "what can we do for a Children in Need event?", "how can we use the piazza?", "how can we get people involved?", etc. Anyway, our little group had a conversation that went along the lines of: it rains, so how can we make the rain a feature? Let's do Singing in the Rain.
Well, it looks like that while my ideas can sometimes be a little off the wall, this time one hit the mark...
As a result you're cordially invited to a fun bit of silliness.
They've created a little video to teach people the moves in advance as well, so take a look :-)
(Those who came to see The Crash of The Elysium will recognise the room above as being the same room the "Exhibition" was in...)
Please come along, get involved and have fun ! :-)
How : Weight loss 21kg, 6 BMI points from 34 to 28
October 27, 2011 at 10:30 PM | categories: health, first world problems, weightloss | View Comments
For those that know me, most of you are unlikely to have ever known me not to be fat - even though I grew up chronically underweight. If you've known me for a while, you've probably seen me try to lose weight, and may've seen me when I lost around 10kg back near the start of 2007. Whilst that was the lightest I'd been in probably 12 years, that weight came back on with a vengeance afterwards. This was probably due to using a "don't eat this, that or the other" diet. The problem there is that it's natural to stop. 2007 was a hard year (as was 2008/2009) for personal reasons, and a denial diet simply doesn't work then.
Anyway, for the past few years I'd made a new year's resolution to try to be lighter at the end of the year than at the start, and to try to lose weight each month. That resolution failed each year.
This year I resolved not to make any new year's resolutions. However, around a week into the year, BBC 1 (or 2) repeated a show called "10 things you need to know about losing weight". Unlike many recent BBC science programmes, rather than banging on about one point for an hour, it spent 5 or 6 minutes on each of the 10 points, running mini-experiments on each to illustrate the point. At the time I was so impressed by the level of content, and how much they backed it up, that I Sky+'d it and made some detailed notes. (I'm basing this post on those notes from then and also thoughts since.)
The short summary of the 10 points is as follows:
- Don't skip meals - Skipping meals makes you hungrier, but also means the brain responds more (measured using an MRI scanner) to higher calorie foods - meaning when you eat next, you eat more than you would otherwise. Demonstrated with people painting a bridge.
- Change your plate size from 12 inches to 10 inches - Studies with popcorn packets found people finished around the same proportion of their popcorn, whatever size the popcorn packet was. Simply using a smaller plate can lead to eating 22% less without noticing. People with more food available eat more.
- Count your calories - Demonstrated that most people can't guess which food has the most calories, using: two sandwiches; a muffin; a few small squares of chocolate; a large plate of potatoes, lots of greens and a chicken breast.
- Don't blame your metabolism - Monitored what a fat actress was eating by using a camera/video camera and food diary. On the surface she was eating sensibly, but when added up was eating around 1000-1500 more calories per day than she should. She also had her metabolism measured (which she blamed), and in fact her metabolism was normal - she was just eating too much.
- Protein staves off hunger pangs - Showed what happens when the stomach shrinks as it empties of food. Chemical called ghrelin gets released - to tell brain stomach is empty. Eating protein appears to block reception (or reduce production) of ghrelin, meaning you don't feel (as) hungry.
- Thick soup keeps you feeling fuller for longer - They made a chicken, rice and veg meal for two groups. Both were served with a glass of water - for one group it was blended with the food to form a thick soup. Both groups had their stomach sizes scanned using ultrasound. Those with the glass of water had the water absorbed quicker - meaning their stomachs shrank back within an hour or two - compared with those with the water blended in. That means the soup people felt fuller for longer.
- The wider your choice, the more you eat - This was a fun demo - two bowls of smarties, placed somewhere with a sign saying "eat me, free". One has a single colour of smarties, the other has lots of colours. The multicolour ones go first. The implication here is that at a buffet you're more likely to go back because of the "have you tried the ..." effect.
- Low Fat Dairy helps you excrete more fat - Some research shows that calcium levels in dairy bind well with the fat in dairy, meaning it's more likely to pass through you "all the way" implying low fat dairy has "spare" calcium. Demonstrated this by feeding a volunteer two known diets for a week, one week high dairy, both with same fat content. Week with high dairy led to higher fat in faeces, meaning unlike other week it hadn't been absorbed.
- With exercise, the afterburn is what matters - Measured metabolism immediately after exercise and 10-12 hours later. Found that the effect of exercise isn't necessarily how long you exercise for, but whether it's sufficiently long. Essentially, if you use up your carbohydrate reserves, the body "burns" fat until it's had a chance to rebuild its carbohydrate reserves again. The point being that a bit regularly was better than rarely and lots.
- Keep moving to lose weight - Possibly the most lightweight part of the programme. Made the point that it's possible to integrate a bit more exercise if you look at journeys to/from work and how you work.
As I say, each point was backed up with the help of tools to measure metabolism and plausible demonstrations/recreations of various pieces of research.
So, for me the key things were:
- Know how many calories there are in things, don't base portions on crockery size.
- Low fat dairy (or stuff with calcium in) is useful.
- Don't get over hungry - manage your ghrelin levels using soup, protein and eating regularly.
- You need to exercise enough, and with some thought can build it into your day. How much exercise is "enough" depends on the person.
This really boils down to - eat a variety, eat low fat dairy, count calories, don't over eat, and build some exercise into your day.
New Year's Resolution
As a result, I resolved to start counting calories - NOT to change what I was eating.
My reasoning here was simple - if I had no idea what I was eating in terms of calories, how would I know when I'd reduced them? The other reason is simple - it's not a denial based thing - it's something that allows me to keep on doing it whether or not I'm under or over eating.
Around the same time I'd been looking around for something to use for taking notes, and came across rednotebook. Rednotebook is organised like a diary, in that there's a page per day, and uses a very simple markup for writing notes. It provides simple tag and word clouds, along with a simple full text search. I've been using it all year.
Counting, Measuring and Monitoring
Anyhow, based on that I started writing down everything I was eating or drinking. For things involving ingredients, I've weighed things in advance, then found a container that holds that much and used that as a measure since. After a little while I realised that, like many people, I fall into food habits. This naturally makes it quicker to write down foods and their calories.
So, have I measured every day all year ? No. There have been 2 periods this year when I've not done that. The first was near easter when one of our cats was run over. It hit us as a family hard, and I really didn't want to do anything. Since it was easter, I estimated those two weeks intake in terms of calories based on previous weeks. The latter was around my birthday, and it's totally impractical to count every sweet - so I took the tins of sweets, found out the calories in them and divided them out over the days I ate them - again as a realistic guestimate.
The final step though is that I'm a geek. I figured if I'm going to be OCD about it in order to learn how many calories there are in different foods and learn better habits, the more I monitor things the better. As a result, yes, I've dumped it all into a spreadsheet and been using that to monitor things. On the one hand, I felt a bit dirty for using a spreadsheet - after all, shouldn't I be using a decent programming language? - but on the other I've gained an insight into why some people use spreadsheets as much as they do.
So, to give you an idea of what this gives me - over the past 284 days I've eaten 551,332 calories, or an average of 1,941 calories per day. For comparison, if I was eating the recommended daily allowance (RDA) for calories I "should" have eaten 688,708 calories, or an average of 2,425 calories per day. That means I've "undereaten" by 137,376 calories. Given I've lost about 21kg of weight, that means for each 3,000 calories I've undereaten I've lost a pound of weight.
Anyone who's looked at lots of diets will "know" that this matches roughly what people say, but it's interesting (to me) to see it stand out so starkly.
So, why have I stuck with this? Have my habits changed ?
Changing Habits Requires Understanding
I think there are a number of reasons why I've lost weight, learnt from previous diets:
- Whilst it's possible to eat 3000-4000 calories in a single sitting, you don't automatically gain a pound of weight.
- Doing something like Atkins shows you there's a "burn in" period for diets to work.
- Doing a little exercise for a day or two has less effect than regular exercise.
- Measuring weight daily is pretty pointless, since changes you measure relate to the weight of the food you ate. (Takes food up to 48 hours to go through the digestive tract)
- So, working on a daily basis for intake is probably a bit silly too.
- Eating too little generally results in piling on weight afterwards
So, I decided to find a formula for calculating RDA, and use that to calculate a weekly figure. There are a few out there, but the one I'm using is this:
Calories = ( 10 * weight in kg + 6.25 * height in cm - 5 * age + 5 ) * activity factor
Where activity factor is:
1.200 = sedentary (little or no exercise) This is the figure I pick
1.375 = lightly active (light exercise/sports 1-3 days/week, approx. 590 Cal/day)
1.550 = moderately active (moderate exercise/sports 3-5 days/week, approx. 870 Cal/day)
1.725 = very active (hard exercise/sports 6-7 days a week, approx. 1150 Cal/day)
1.900 = extra active (very hard exercise/sports and physical job, approx. 1580 Cal/day)
(The formula for women is: (10 * weight in kg + 6.25 * height in cm - 5 * age - 161) * activity factor)
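As a rough illustration of how the activity factor and the weekly budget fit together (the practice steps just below describe the actual routine), here's a small Python sketch. The weight, height, age and daily totals are invented, and the real bookkeeping lives in a spreadsheet rather than code.
# Illustrative sketch: RDA with an activity factor, rolled into a weekly budget.
# All the personal numbers below are invented.

ACTIVITY_FACTORS = {
    "sedentary": 1.200,
    "lightly active": 1.375,
    "moderately active": 1.550,
    "very active": 1.725,
    "extra active": 1.900,
}

def weekly_budget(weight_kg, height_cm, age, activity="sedentary", target_fraction=0.75):
    # Daily RDA from the formula above, scaled to a 7 day allowance and a 75% target.
    daily_rda = (10 * weight_kg + 6.25 * height_cm - 5 * age + 5) * ACTIVITY_FACTORS[activity]
    return 7 * daily_rda, 7 * daily_rda * target_fraction

week = [1850, 2100, 1700, 2400, 1900, 1650, 2000]   # hypothetical daily kcal totals
allowance, budget = weekly_budget(weight_kg=100.0, height_cm=180.0, age=38)
eaten = sum(week)
print("Ate %d of a %.0f kcal weekly RDA (%.0f%%); the 75%% budget was %.0f"
      % (eaten, allowance, 100.0 * eaten / allowance, budget))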
So, in practice I do this:
- Calculate an RDA figure as above, once a week.
- Set a "target" usage of 75%. Going over this is fine, going under is fine, though if at the end of the day it's below 60% eating some toast or cheese or whatever to bring it up to a minimum of 60% . This target will probably shift to 85% as I get closer to my target weight.
- To treat this like a budget and average things over a 7 day period.
- This means some foods are more expensive (in terms of calories) than others. If I go over budget one day, going under on another makes up for it. If you go over all week, with money you'd need an overdraft. With calorific overspend, you get an over draught - as your belly overhangs your belt and gets a draft.
- No foods are off limits - remember my resolution was to count calories, not restrict them.
This combination does mean that I have to count everything, and that's a bind. However because no foods are off limits, I can make a guess of "is this OK", followed by later estimating the actual amounts. The upshot of this is immensely freeing.
Snacks
I found in early days that I snacked a lot, and that whilst individual meals were generally OK, some of the snacks weren't. Some examples of snacks (and what they're equivalent to)
- A few biscuits and a chocolate can easily hit 400 calories (vs a good lunch at Nando's)
- A venti latte with shot of syrup, and a sticky bun - 800 calories (vs a deep pan cook at home pizza)
- Flapjack - 400 calories (vs see above)
- Packet of Doritos - 1300 calories (vs a large deep pan cook at home pizza)
Clearly I wouldn't eat all those things often, but the point is that I'd snack differently on different days, but ultimately, I'd snack. In part this is because I'd skip breakfast, or skip lunch. One of the killers here though is that many many "cheap" (or snack) foods are high in carbohydrate, sugar and fat, but low in protein. The sugar and fat make them moreish, the carbs make you think they're filling, but the effect wears off, and the lack of protein means that they don't really satiate you. The upshot is a high (but empty) calorie intake which just makes you gain weight.
Changing Behaviours
What I did was decide to work with the way I ate rather than against it, though in a more informed way.
- I found out the volume that a recommended portion of different types of cereal comes to, and bought a bowl that holds approximately the recommended portion size. (A "normal" cereal bowl - eg the sort given away for free - is about 4-6 portions(!)) Since then, for each new cereal, I've weighed the amount that fills the bowl. This means that I never need to weigh that cereal again, and can just calculate based on that.
- The sort of bowls I'm using for cereal are glass and about 8cm across. This sounds small, but you rapidly get used to it, and seems about right.
- This means I get breakfast every morning, for between 230 - 300 calories depending on what I have.
- As a result, when I get hungry at lunchtime, I go for lunch. If it's the canteen, I'll have the soup which contains meat (for the protein). If I go to the Lowry Mall, I'll get a baked potato with cottage cheese (for the same reason). Soup + bread is around 250-300, potato +cottage cheese is about 350.
- Before we moved to MCUK, I'd get a "lighter choices" ready meal from Tesco express and cook it in the microwave at work, since that'd be about 400 calories.
- This leaves me then with an average of 1000-1200 calories for evening meal, which is more than enough for a good meal.
So, for me snacking was caused by two things - habit and hunger. The latter is managed by:
- Regular meals
- Eating 1 or 2 babybel light cheeses when I'm hungry and a meal is "too far off" - low fat, high protein - to suppress the ghrelin effect.
- Actively choosing snacks which will reduce the hunger effect.
Regarding reducing habits, I've dealt with that by buying puffed wheat (like sugar puffs which haven't had anything added - no sugar, syrup etc). I've then tried flavouring that with things like peri-peri salt, or with a sucralose type addition. That gives something bulky to snack on. If it's late at night, I'll either have a babybel or toast, etc. Also, if I really feel like a sweet snack, I'll have something like a Cadbury's brunch bar - because it's got a clear indication of calorie cost, but it's also not so small that you think "I'll have another". Simply being aware of eating has made me snack less.
Clearly I still go out to the shops, but if I have a coffee whilst I'm out, rather than having a latte, I'll have an Americano. Similarly, if I'm having something to eat there, instead of a cake, I'll either get a small pack of mini muffins or wafers to share, or a cheese toastie. The former would be for snacky habit reasons, the latter is if I'm peckish/hungry. (And a cheese or ham/cheese toastie again is protein rich)
Similarly, there have been other revelations - for example Nando's can be a remarkably healthy lunch or meal out as a treat - having rice instead of chips, having a side salad, and similar can mean that it's lower calorie than a "meal deal" of sandwiches, crisps and coke from Boots.
Exercise?
From the way which I'm monitoring things, it's clear that exercise helps. However it rains here a lot and I'm not going to go to the gym. Ever. As a result, the vast bulk of my weightloss hasn't had anything to do with extra exercise. That said, now that I'm over 20kg lighter, I do unsurprisingly find climbing the stairs to the 5th floor at work less of a struggle, cycling is easier, and walking from Old Trafford to work is quicker than getting the tram, so on nice days (or days where I must be there by a certain time), I'll walk from Old Trafford. (though there is a remarkably tempting fryup/cafe on that route..)
However, it's a misconception IMO to say that you must exercise to lose weight. What would be worse is to start exercising and not change your diet - primarily because when you stop exercising your diet will still be just as bad, and you'll put all the weight back on. Bear in mind that when I was between the ages of 11 and 14 I'd be cycling 20 miles a week, rising to 36 miles a week between 15 and 16, and then to about 115 miles/week between 16 and 19.
Oddly enough, continuing to eat the same sorts of food I'd grown up eating between 11 and 19 - when I was cycling dozens of miles a week (or a day) - results in significant weight gain with a sedentary lifestyle. It's just a pity that it took me so long to realise that the habits I gained growing up didn't really fit my adult lifestyle. (I was chronically underweight as a teenager/young adult - probably due to the amount of cycling.)
So, whilst exercise is important, understanding how you're eating is more important, and especially the relationship between exercise and diet.
I'll note this though - cycling to work for a month before going on holiday meant that the decreased weight loss over the course of the holiday was offset by the increased weight loss when cycling. So being a fair weather cyclist is better than not cycling at all. Similarly, measure how long you can walk and how far.
When I started this, my limit for walking quickly was chunks of around 20-25 minutes. That hasn't really changed (I am still overweight by about 20kg after all). What has changed as I've lost weight is that my speed has improved. What used to take 20-25 minutes now takes about 11 or 12 minutes.
Also, my BMI was 34. Someone thin with a BMI of 21 or 22 saying "hey just exercise more" hasn't got a clue. Start off doing something of 10-20 minutes that leaves you out of breath. That might be walking to the shops or walking to the station. Or walking to a different stop. But do it. It'll help start a virtuous circle. Try to do something every day if you can - including the weekend - even if it's just window shopping.
What else?
So clearly this approach is working. I note everything down, find out figures for foods from things like supermarket websites, from web searches, from packets etc. I put it all in a spreadsheet and use that to track a number of different things. In particular, I have 4 sheets:
- Daily - into this I put the total calorie intake, distance walked, distance cycled, and monitor that relative to 60/75/80/100% RDA, a rolling 7 day average, and a rolling 7 day average of averages. (A small sketch of the rolling average follows this list.)
- Weekly - weight, weight delta, BMI, calories eaten, RDA calories, average weight loss (to estimate dates when will have lost weight)
- RDA Calcs - wraps up the formula above, and various notes/versions to make understanding better
- Monthly - RDA calories vs actual calories vs weight vs weight loss per month.
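To give a flavour of what the Daily sheet's rolling 7 day average looks like in code, here's a small Python sketch. The intake figures and RDA are invented, and the real tracking lives in a spreadsheet rather than a script.
# Illustrative sketch of the Daily sheet's rolling 7 day average of intake vs RDA.
# The intake numbers and RDA below are invented.

daily_intake = [1850, 2100, 1700, 2400, 1900, 1650, 2000,
                1750, 2200, 1800, 1950, 2050, 1600, 1900]   # kcal per day
rda = 2400.0                                                # kcal/day, recalculated weekly

def rolling_average(values, window=7):
    # Average the last `window` values at each day (shorter window at the start).
    averages = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        averages.append(sum(chunk) / float(len(chunk)))
    return averages

for day, avg in enumerate(rolling_average(daily_intake), 1):
    print("day %2d: 7-day average %.0f kcal = %.0f%% of RDA"
          % (day, avg, 100.0 * avg / rda))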
Some Findings
As sad as it may sound, I've also been graphing these to see what trends emerge.
- It's clear that exercise does increase the rate of weightloss if your calorific intake stays constant
- Eating more calories than RDA over the course of a week causes weight gain, of an average of 0.7kg or about 1.5lb. There's no particular correlation of "amount of calories over eaten" vs weight gain. It seems that there's a limit as to how much weight the body can put on at any point in time.
- Eating fewer calories than RDA over the course of a week causes weight loss, of an average of 0.7kg or about 1.5lb. Again, there's no particular correlation of "amount of calories undereaten" vs weight loss. Again, it seems that there's a limit to how much weight the body will lose at any point in time.
- The thing of 3000 calories per pound of weight loss does seem to be true. Interestingly, that's also about the same as the amount of calories in a pound of margarine (eg Stork). Worth pondering.
What that means of course is that if you're going to over-eat for whatever reason - be it celebrating someone's birthday, christmas, etc the best approach is to actually feast - ie if you're going to go for a big meal, go for a BIG meal. If you're given a tin of sweets for a birthday, don't try and spin it out, binge on them. You'll have more fun (the reason the person gave you the sweets), and by eating them quicker it'll have less of a long term impact.
Likewise, this also means that if you want to lose weight, you actually have to do it for at least 3-4 weeks to lose any real weight, and even then only expect to lose just under 1/2 stone (nearly 3kg/6lb). Anything else is probably just emptying of your digestive tract and water loss.
The other slightly depressing observation though is this - it'll take at least as long to get the weight off as it took to put the weight on in the first place. I've been doing this "diet" now for 284 days. For me to reach my target weight will be another 280 or so days. Thankfully, the "diet" I'm on allows me to have things like pizza, pasta, cake, cheese, meat, bacon, sausages, cheesecake, cakes, desserts, chocolate, mousse, yoghurt, custard, vegetables, chicken, salad, ice cream, hot dogs, and all sorts of similar foods.
Bottom line
If you want to lose weight: count calories (for at least 6-8 months, until you get a good feel for calorific cost), aim for 75% of RDA, and don't worry about occasionally going over or under unless you drop below 60%; snack on protein if you must (ideally less than 80 calories); and buy some smaller crockery to control portion sizes. (Really hard to overstate how much of a difference that makes.)
Closing thought
My BMI is now 28. This means I've lost 6 BMI points. My target is the middle of the healthy BMI range 22 - which means I have another 6 BMI points and another 9 months to go. I suspect the next 9 months will be harder than the first 9 months. If you're struggling to lose weight though, losing 3 1/3rd stone, 46lb or 21kg is possible, and doable without a denial diet.
Just focus on eating sensibly, on as much exercise as you're comfortable with, and on the next 1kg or 1lb at a time. That's what I've done. Seems to work.