Hacking the Cortex: Thousands of Steps, Dozens of Falls.


A lot of the accepted wisdom with regard to learning holds that certain skills, such as language and walking, are learned because a “module” in the brain activates and allows us to learn that skill quickly and effectively. The assumption made in these models is that the brain has evolved to contain a set of rules which constrain that learning and accelerate the process. For example, with language, we’re thought to have a Language Acquisition Device in the brain that contains a master set of rules for a universal grammar. How else, they argue, could we learn something as complex as language so quickly? How else, they argue, can we explain how the overwhelming majority of children learn to form grammatically correct sentences without explicit instruction in grammar?

Those theorists are flat out wrong. Learning occurs in tiny increments, with many, many errors along the way. Much of it happens unconsciously, so it seems “innate” to observers and to our own subjective experience. Because we don’t see explicit lessons, we believe that there are no lessons.

Children between the ages of 12 and 19 months average a total of 2,368 steps per hour. The physical design of human legs limits their range of motion to the point where learning to walk is relatively easy, as there are only so many ways you can move your legs. But even with that advantage, it takes millions of learning experiences to become proficient at toddling, let alone running, jumping, climbing, sprinting, and navigating obstacles.

The important statistic I want to get at for this article is that within those 2,368 steps, a child also averages 17 falls per hour.

That’s a lot of falling.

Anyone who has played Warmachine for any length of time knows how many falls are involved in learning this game. It is the single most complex and nuanced wargame I’ve ever had the pleasure of playing, but the learning curve is steep. The first few games you play are training wheels games, in which the demoing player goes easy on their opponent and allows them to learn what their army actually does. We slowly introduce the basic mechanics of the game. But once the training wheels come off, we warn them – “You’re going to lose a lot of games”. And they do.

You have to learn about Molik missiles and Snipe-Feat-Go, about Overrun and Goad angles. You need to learn about the true brutality of Cryxian debuffs and overwhelming hordes of infantry. And there’s always more, always another combo.

And that’s just learning what other armies do.

You need to learn the nuances and subtleties of how your own army plays, about how a tiny rules interaction can be exploited to turn your troops up to 11 or to really ruin your day. You have to get blasted apart by Dire Troll Bombers before you learn to spread your troops out, then someone shows you how spray angles can surprise you even then.

There is so very much to learn. And as we established in the previous article, you learn via practice. But there’s more to it than that.

The core process which governs all of human learning is Operant Conditioning.

If you ask about 80% of psychologists (totally made up number, but a large majority) they will tell you that B. F. Skinner was “discredited” or that his findings have been “discarded”. The truth is that the rules of operant conditioning are some of the most well-researched, well-established effects in all of psychology. I’d personally argue that it’s the closest thing we’ve got to true First Principles, and that it’s going to wind up explaining a lot more than anyone ever thought it would. But that’s a whoooole other tangent and I’m getting off topic. The point is, this Skinner guy was legit.

Gaze upon his forehead, ye mighty, and despair.

Operant Conditioning works on the simple principle that behaviour is shaped by the consequences of our actions. Learning is selectionistic in the same way that evolution is selectionistic, but instead of selecting for reproductive success, learning selects for behaviours that are reinforcing.

In simple terms (because this is my psychological wheelhouse so I’m having to restrain myself from Extremely Unnecessary Detail), when we do a behaviour that is reinforced, the probability of us doing that behaviour again is increased. When we do a behaviour that is punished, the probability is decreased.
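To make that concrete, here’s a toy sketch of the principle in code. This is my own illustration, not anything from the behaviour-analysis literature – the learning rate and starting probability are invented numbers, and real behaviour is vastly more complex – but it shows the basic shape: reinforcement nudges the probability of a behaviour up, punishment nudges it down.

```python
def update_probability(p, consequence, rate=0.1):
    """Nudge the probability of repeating a behaviour after a consequence.

    consequence: +1 for reinforcement, -1 for punishment.
    A toy model for illustration only; 'rate' is an invented parameter.
    """
    if consequence > 0:
        return p + rate * (1 - p)  # reinforcement moves p toward 1
    return p - rate * p            # punishment moves p toward 0

# A behaviour that starts out unlikely, reinforced over 20 trials:
p = 0.2
for _ in range(20):
    p = update_probability(p, +1)
print(round(p, 3))  # 0.903 – well above where it started
```

The point of the sketch is just the direction of the nudge: each single consequence moves the probability a little, and it takes many repetitions for a behaviour to become dominant – which is exactly the “thousands of steps, dozens of falls” pattern from the start of the article.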

How is this relevant to Warmachine? On the surface level, it’s pretty obvious – when we do something that loses us a game, that behaviour is punished. Get too close to Molik? Bam, instantly punished. We learn not to do that pretty quickly.

The problem is that our basic learning processes are pretty bad at judging cause and effect. The stimulus that happens right before the consequence is taken to be the cause. Getting to a higher level of Warmachine play requires the ability to analyse potential causes that happen long before the consequence of losing. In a straight-up assassination game, the behaviour that caused the consequence is often obvious – you put your caster inside the threat range of a model that could kill them. That’s why learning threat vectors tends to happen early on, and smart learners learn to ask “what’s the threat range of X?” or “do you have any effects which increase charge ranges?” when they encounter a new army. But in games that go to attrition or scenario, big causes can get missed. You can lose an attrition game because of how far you ran on your first turn, but still not take enough losses to realise that that was the only real mistake you made, because the consequence of losing the game comes so much later. You can lose a scenario because you failed to grab a point top of two that you didn’t see was on, and then lose five turns later.
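The recency bias above can be sketched in a few lines of code. The turns and action names here are invented for illustration: a naive learner blames whatever happened right before the loss, while post-game analysis can trace the loss back to the decisive early-game error.

```python
# A hypothetical game log: (turn, action). Names are invented examples.
game_log = [
    (1, "ran too far forward"),      # the actual decisive error
    (2, "traded infantry evenly"),
    (3, "held the zone"),
    (4, "contested the scenario"),
    (5, "caster left in the open"),  # the proximate, visible error
]

def naive_blame(log):
    """Recency-biased credit assignment: blame the last action only."""
    return log[-1][1]

def reflective_blame(log, actual_cause_turn):
    """Deliberate post-game analysis can point much earlier in the game."""
    return next(action for turn, action in log if turn == actual_cause_turn)

print(naive_blame(game_log))          # "caster left in the open"
print(reflective_blame(game_log, 1))  # "ran too far forward"
```

The gap between those two answers is the gap between automatic learning and deliberate review: the automatic process only “sees” turn five.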

This topic is on my mind at the moment because I feel that’s one of the areas where I need to “level up” my game – to notice the early-game errors which are making it more difficult for me to win later.

You can “fix” these kinds of flaws in your learning processes by being aware of their particular bias towards immediate causes. You can “rewire” your automatic learning by talking through what happened in the game, and by being mindful of the effects of your early moves on the game.

The second way that Operant Conditioning folds into Warmachine is on the issue of takebacks. I’m all for takebacks when you’re teaching the game, or learning a new caster (for the first few games at least) – at that point, the objective is to learn the rules, not to learn to win. But consider takebacks in the context of Selection by Consequences.

Antecedent: You screw up

Behaviour: You ask for a Takeback, which is given.

Consequence: Your screwup no longer impacts the game (Reinforcing consequence)

And when a behaviour has a reinforcing consequence, it becomes more probable that it will be your response in that situation in future. You become more likely to ask for takebacks in future. If you change the above situation and remove the takeback, then the screwup has a punishing consequence in some way, shape, or form. What happens? The screwup (and you knew it was one, because you asked to take it back) becomes less likely to happen again in future.
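The contrast between the two contingencies can be sketched as a toy simulation. The numbers here are illustrative assumptions of mine, not data: each trial, a screwup either has its consequence removed (takeback allowed, so no learning signal) or is mildly punished (no takeback).

```python
def run_trials(p_screwup, takebacks_allowed, trials=50, rate=0.05):
    """Toy model: how probable a given screwup stays over repeated practice.

    'rate' and 'trials' are invented parameters for illustration.
    """
    for _ in range(trials):
        if takebacks_allowed:
            pass                           # consequence removed: nothing learned
        else:
            p_screwup -= rate * p_screwup  # punished: screwup becomes less probable
    return p_screwup

with_takebacks = run_trials(0.3, takebacks_allowed=True)
without_takebacks = run_trials(0.3, takebacks_allowed=False)
print(with_takebacks > without_takebacks)  # True
```

Again, this is only the direction of the effect, not its size – but under any such model, the no-takeback condition is the only one where the screwup’s probability actually moves.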

Win-win. Less likely to screw up, rather than more likely to ask for a takeback – a behaviour that can be bad in a tournament setting, as some people don’t like to give them, and that only leads to hurt feelings. Particularly if it’s a game-ending screwup (and people are way less likely to give takebacks when you’ve just punted the game, even if a couple of minor ones have been given before – where do you draw the line?)

From an operant conditioning perspective, takebacks are bad news in serious practice. I was on the side of allowing them in practice (to let the game unfold and really test the list), but as soon as I started to think about Operant Conditioning and Warmachine I realised my position was objectively less good than a “no takebacks” rule when serious practice was being engaged in. You want to reinforce good habits, and punish bad habits. (Disclaimer: punishment is far less effective in shaping behaviour than reinforcement. They’ll take away my behaviour analyst card if I don’t point that out very clearly. The best thing to do is to simply Not Reinforce bad habits, rather than actively punish them.)

Good Night, and Good Luck


P.S. I was planning to talk about some of the potential pitfalls of practice and expertise here, but I felt I needed to talk a little about conditioning first, and of course it got away from me… Next week, I promise!

P.P.S. I think I need a new sign off. The only article I’ve written after nightfall has been my first, so it just feels weird to type…

6 thoughts on “Hacking the Cortex: Thousands of Steps, Dozens of Falls.”

  1. I really enjoy your articles, but this one provokes me a bit – all in good fun of course 😉

    Your conclusion that “asking for a takeback in one game leads to asking for more takebacks” without any social, environmental, or emotional context seems a huge leap (yes, I realize that you put strength markers in there as well hehe – more probable/more likely etc.). Asking for a takeback in a friendly game with your best bud leads to asking for takebacks at the WMW finale?

    I can see what you are trying to say, and I agree with the “takebacks are bad for learning” idea, but the subject is so much more nuanced than Skinner can ever hope to explain (humans as well as takebacks!). Maybe it’s just my lack of stock in radical behaviourism – but Skinner is too weak for Warmachine, sir! 😉

    Looking forward to more cortex hacking 🙂

    • It’s definitely true that context is a vital factor in learning, and that WMW and a casual game are very different contexts. (And Radical Behaviourism does account for context, particularly its more modern developments such as Relational Frame Theory.) But a behaviour that is well established and highly probable in one context (casual games) is increasingly likely to generalize to similar contexts (local tournaments) and then further to less related contexts (WMW), particularly given that “A Game of Warmachine” is a fairly solid overarching context. It is a little bit of a leap, but I did qualify with “more probable” – the universe is probabilistic to its core, and so is behaviour.

      Skinner doesn’t get his due in modern psychology, but I’m strongly of the view that operant conditioning explains a hell of a lot more behaviour than anyone ever thought it would (for the big bugbear, language, look at Relational Frame Theory). The applicability of the operant to complex (context-dependent, nuanced, social, cognitive and emotional) human behaviour isn’t immediately obvious, because it’s a simple principle, but from simple principles great complexity and nuance can emerge – evolution is the obvious comparison, and indeed a direct one that Skinner makes. Evolutionary selection, at its core, is based on a few simple principles, but when applied to a complex environment the expressions of those simple rules are highly nuanced – behaviour is the same.

      That all said, I know it’s a controversial position. But Skinner is far from weak, good sir! 😀
      (EDIT: I love a good Skinner debate! More Skinner in the future!)

  2. I can see you have experience in the field of Skinner debate. During my studies, behaviourism by itself was largely ignored since it dismisses cognition, so I have only ever had the mixed pleasure of reading Pavlov and Skinner, and not the modern developments of behaviourism you mention. As such I’m obviously not as informed on the subject as you are, which makes your argumentation interesting and good learning 😉 I rarely encounter behaviourism in practice except for cognitive behavioural therapy and Bandura*. *(How do you see Bandura fitting into the behaviouristic scheme? He has at least some behaviouristic tendencies in his theories of social cognition.)

    I can see your point about operant conditioning, but in my practical experience real people are less “coded” and less subject to coding (not sure if coding is the correct word here – I’m not used to debating these things in English hehe). I’ve had patients with almost totally similar issues, and one person changes behaviour and the other doesn’t, even though changing behaviour could save the person from losing limbs, eyesight, or death. Take the context “a Game of Warmachine”: you state that as being a “fairly solid overarching context”, where I think it’s a limited context and needs deeper complexity before we can begin to “train”/change behaviour. Different emotions and cognitive relations play a role and can dramatically change the context. A game of Warmachine between two good buddies (or budettes) on a Friday night is vastly different from a final-table game against a complete stranger you think is kind of a dick.

    Onto the subject of takebacks. While playing my good buddy, I make a huge mistake which will surely cost me the game (not counting dice luck) and end the game quickly – so I ask for a takeback. As long as win/lose are the only consequences of the game, I will not object to your position about Skinner and takebacks, but what happens when both parties in the game just want to play a good friendly (preferably long game = more time for beer)? Or, as you already covered, are on a learning path (newbie or new caster etc.). In your own example of:

    Antecedent: You screw up

    Behaviour: You ask for a Takeback, which is given.

    Consequence: Your screwup no longer impacts the game (Reinforcing consequence)

    The argument for more complexity would be that the takeback itself impacts the game, but on a cognitive and emotional level instead. You got a takeback, now you need to give him takebacks too, and if you don’t you are a dick. Also, should you win the game… is it really a victory, because you had that takeback? (Personally I try to play without ever doing/accepting takebacks; it gives me the “feeling” that I owe my opponent, and lessens the excitement I gain from a victory.) So instead of a reinforcing consequence, you’ve just swapped one problem for another, both impacting the game, both something to be avoided?

    Excellent debate, sir, and I apologize if my choice of words and grammar is a bit off 🙂 Maybe Skinner isn’t weak, but Warmachine is just too strong? ;P

    • Radical Behaviourism doesn’t dismiss cognition – Skinner’s focus on animal studies and simple learning processes was a pragmatic research decision, as he didn’t think psychology had the first principles established to a level where cognition could be talked about meaningfully. The intention was always to get there eventually, which the field didn’t manage during Skinner’s heyday, but it’s recently made discoveries that have expanded the explanatory power of Behaviourism to a point where we have (what I think is) a very solid model of language and cognition in Relational Frame Theory (a good intro is Blackledge 2003 – http://www.pegahuman.no/BAT-34.pdf#page=61)

      The key thing is that Behaviourism considers thought/emotions/memory to just be another type of behaviour – subject to the same learning laws, but difficult to study, because the entire chain of antecedent, behaviour, and consequence can happen privately. (Behaviours can be antecedents and consequences of other behaviours, so thoughts are massively complex unfolding sequences of operants happening in private.)

      I definitely agree that applying the first principles of operant conditioning is very difficult in practice – Behaviourists have had great success treating autism and developmental disorders in children because the therapist and parents have a sufficient level of control over the environment to enact meaningful behaviour change (by altering consequences). RFT also strongly informs Acceptance and Commitment Therapy, which is slowly proving to be as effective as CBT in some situations (though clinical psych isn’t my focus, I’m a research/basic theory nerd through and through).

      On the takebacks issue, I agree that perhaps I oversimplified, as the different contexts can vary a lot – we’re reinforced by different things depending on whether the game is casual vs. competitive. I think the basic point still stands – over enough instances of being reinforced for takebacks, the behaviour of asking for a takeback is likely to become generalised to other games – but as noted in the first half of the article, it does take a lot of instances for a behaviour to shape itself strongly. My anti-takeback stance comes from the idea that it’s better to not nudge the probabilities in a direction that would slowly train you to ask for takebacks habitually. But of course, context is king. 😀

  3. You make a compelling argument for Skinner’s case, I must say – so much that I have taken a break from making bases for my mercs army to look at Blackledge hehe. It really is a shame that Skinner is normally just discarded so easily – especially since the argument we were given against behaviourism was exactly that it dismissed cognition and dealt only with transferring animal behaviour studies to humans 🙂 I’m used to working with behaviour change in practice in a clinical setting, but from a somatic diagnosis. We often treat mental health patients for somatic illnesses, and as such our knowledge and tools need to be up to date 🙂 So I’m gonna take a new look at behaviourism and RFT 🙂 Thank you, sir.

    Also, on the notion of takebacks, I agree wholeheartedly – it’s a thing of evil… but probably more because the takeback itself becomes a point of contention hehe – but I do see your point about it becoming a “habit”, and a bad one at that.

    Very interesting articles, btw – I really enjoy them a lot. Keep going hehe – I think most gamers who read these come away with new insight into themselves and their gaming habits. /respect

