They provide proof-of-concept data for the treatment of apathy, which is increasingly recognized as a key component of several neurological disorders (Bonelli and Cummings, 2008;
Marin, 1991; Chow et al., 2009; Starkstein, 2009). Unlike other tasks involving risk, such as the Iowa Gambling Task (Bechara et al., 1994) or the Cambridge Gamble Task (Clark et al., 2004), our TLT requires participants to take risks by making anticipatory responses. Many other paradigms place certain and risky options on an equal footing, with the same amount of effort required for either choice. This has the benefit of establishing risk preferences independently of effort, but it tends to favour a careful, deliberative response strategy. By contrast, the traffic lights paradigm imposes time constraints on decisions
and rewards behaviour that might be considered ‘functionally impulsive’ (Dickman, 1990): on this task, it can be functionally useful to make anticipatory responses because these can lead to greater rewards, analogous to many situations in real life. It is possible that KD’s lack of anticipatory responses on this task reflects risk aversion, rather than lack of motivation or unwillingness to make an effort for rewards. However, it is less easy to explain how such a mechanism might account for behaviour on the directional saccadic task, where there was no risk of incurring a penalty. How did dopamine reverse apathy and reward insensitivity? Substantial evidence links dopamine to reinforcement learning (Schultz, 2007). However, a growing body of research also implicates dopamine in effort-based decision-making, generating the motivation and vigour needed to overcome the costs of initiating actions (Niv et al., 2007; Kurniawan et al., 2011). The progressive improvement of KD’s performance on the TLT immediately after the introduction of l-dopa (Fig. 6B) is suggestive of dopaminergic enhancement of learning. However, during the drug holiday such learning was radically reversed (Fig. 6C), suggesting that if this effect
was solely due to a reinforcement learning effect of l-dopa, it had not been completely consolidated: dopamine was still required to maintain it. On the directional reward-sensitivity task, l-dopa also had a dramatic effect after its introduction, speeding saccades to the RS (Fig. 7). During the drug holiday, however, there was no longer any significant reward-sensitivity, although saccades remained generally faster than before treatment, suggesting some general, non-specific effects of practice on the task. The time course of the drug’s effect on reward-sensitivity, and its reversal during the drug holiday, make it unlikely that dopaminergic effects on synaptic plasticity and learning were the only mechanism of action. Instead, l-dopa might also have acted on response vigour, helping to overcome the costs of effort (Niv et al., 2007; Kurniawan et al., 2011).
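The distinction between these two candidate mechanisms can be sketched schematically (an illustrative summary of the cited models, using generic symbols rather than quantities fitted to KD’s data). Phasic dopamine is standardly modelled as a temporal-difference reward prediction error that updates learned values,

\[
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t ,
\]

whereas in the account of Niv et al. (2007) tonic dopamine signals the average reward rate \(\bar{r}\), which acts as an opportunity cost of time: responding at latency \(\tau\) incurs an energetic cost that falls with latency plus an opportunity cost that grows with it,

\[
\mathrm{cost}(\tau) \approx \frac{C_v}{\tau} + \bar{r}\,\tau \quad\Rightarrow\quad \tau^{*} = \sqrt{C_v/\bar{r}} .
\]

On this schematic reading, a higher effective \(\bar{r}\) under l-dopa would shorten latencies and energize responding generally without requiring any newly learned values to be retained, which is more consistent with the rapid reversal observed during the drug holiday than a purely learning-based account would be.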