Universal Design for Learning is Not Learning Styles

Icons representing universal design for learning
Icons from https://www.flaticon.com

Recently, at a conference, I spoke with an educator who claimed that universal design for learning is the same as learning styles, and that we should therefore use learning styles in instruction and training.

No, they are not the same.

Universal design for learning (UDL) facilitates the achievement of learning goals for individuals with differing learning needs and capabilities [1]; it provides a blueprint and framework for creating instructional goals, content, and assessments that can suit everyone [2].

UDL consists of three main blocks, tailored to the instructional environment [3], that address learning differences [4]. These are multiple means of:

1) Presentation: using a variety of ways to present content to help learners acquire knowledge [3], e.g., text-to-speech [5]

2) Action and expression: encouraging learners to demonstrate their learning in a variety of ways, such as writing an essay, recording an audio response or creating a video [2]

3) Engagement: using a range of practices and adjustable levels of challenge to enhance learners’ motivation [6], e.g., the ReadingBar software, which includes a volume control slider [5]

Considering learning differences and the different paces at which individuals learn, implementing UDL can help learners who cannot keep up with their peers or who have learning disabilities. UDL does not just provide accessibility; it eliminates barriers so every learner can succeed [6]. Organizations and instructional designers can use UDL in their learning design process. Extensive research has shown that UDL supports strategic learning and enhances learners’ learning experience [3, 7, 8].

As you see, UDL is different from learning styles, which have been debunked for lacking supporting evidence. In contrast, research on UDL has been widely replicated and is grounded in the learning sciences, neuroscience, and cognitive science. UDL is deeply rooted in the Zone of Proximal Development (ZPD), scaffolding, and modeling.

If you are interested, learn more about UDL here.



(1) Trostle Brand, S., Favazza, A., & Dalton, E. (2012). Universal design for learning: A blueprint for success for all learners. Kappa Delta Pi Record, 48(3), 134-139.

(2) Tobin, T.J. (2014). Increase online student retention with universal design for learning. Quarterly Review of Distance Education, 15(3), 13-24.

(3) Schelly, C.L, Davies, P.L. & Spooner, C.L. (2011). Student perceptions of faculty implementation of universal design for learning. Journal of Postsecondary Education and Disability, 24(1), 17-30. Retrieved from https://files.eric.ed.gov/fulltext/EJ941729.pdf

(4) CAST (2011). Universal design for learning guidelines. Retrieved from www.udlcenter.org

(5) Edyburn, D.L. (2005). Universal design for learning. Retrieved from http://www.ocali.org/up_doc/UDL_SETP7.pdf

(6) Novak, K. (2016). UDL now!: A teacher’s guide to applying universal design for learning in today’s classrooms. CAST Professional Publishing, 45, 237-238.

(7) Smith, F.G. (2012). Analyzing a college course that adheres to the universal design for learning (UDL) framework. Journal of the Scholarship of Teaching and Learning, 12(3), 31-61. Retrieved from https://files.eric.ed.gov/fulltext/EJ992116.pdf

(8) Spencer, J., & Whittaker, C.R. (2017). UDL: A blueprint for learning success. Educational Leadership, 74(7), 59-63.


A Brief History of Learning & How it All Started

History of human learning from Greek philosophers, to ecology, to EEG
Image source: pixabay.com

“History is for human self-knowledge. The only clue to what man can do is what man has done. The value of history, then, is that it teaches us what man has done and thus what man is.” ~R. G. Collingwood

Throughout the centuries, humans have always wanted to learn about the world and about how we think and behave. The efforts of the early philosophers, psychologists, and scientists have produced significant progress in how we learn. Let’s see what the history of learning can teach us.

The rise of epistemologies

Ever since humans started to wonder how we think, perceive, and recall information, the fields of science and psychology have seen ongoing advancement. In The Story of Psychology, you learn how these advancements were started by a number of Greek forerunners in the early 5th century [1]. These philosophers were interested in human cognition and proposed explanations of mental processes that eventually led to Western psychology. For example, Alcmaeon’s theory of perception, despite being thoroughly incorrect, marked the beginning of epistemology. Alcmaeon believed humans derive ideas through perceptions in their brains, sent via air channels from their organs. Democritus, in turn, believed knowledge is constructed in the human brain via the interaction of images of atoms transmitted to it.

Later, Socrates developed an alternative to perception-based theories: we learn not from experience but from reasoning, which helps us discover the knowledge within us. His teachings influenced the development of psychology. His theory of innate knowledge later became part of the psychological theories of Plato and Kant, as well as of current psychologists, linguists, and parapsychologists. While Plato shared the notion of innate knowledge with Socrates, he rejected the idea that perception is the source of knowledge and argued that we learn through deductive reasoning and reflection.

Plato’s student, Aristotle, had a different view. He believed that we arrive at knowledge through inductive reasoning drawn from observation and empirical data. This is what we now consider a fundamental aspect of the scientific method, and it has characterized science ever since. Aristotle’s other theory (that knowledge is retained through experience, especially experience with salient emotional impact) is still regarded as an enduring epistemology among educational psychologists.

Some of the theories of these early philosophers laid the groundwork for current learning theories and learning science. For example, Aristotle built on his predecessors’ theories to conclude that, to arrive at general truths, our mind and body together make use of perceptions, whether deductively or inductively. Aristotle’s conclusion, and those of successors such as Locke (who believed that complex ideas are constructed in the mind), hinted at what would later be called cognition.


Aristotle’s theory of the processes of inference puts him ahead of many modern psychologists [2]. These processes, later referred to as mental models, are built in our minds through inferences and propositional representations of what we see in the world. To find meaning in the events around us, we need mental models: our conceptual models of the way things work, how events take place, or the way people behave [3]. Psychologists such as Johnson-Laird and Baddeley saw the role that language can play in the acquisition of concepts and in human cognition. They found that memory traces can be registered and refreshed through verbal rehearsal [4].

Further, findings by other psychologists, such as Atkinson and Shiffrin, suggested that human memory has three components: sensory, short-term, and long-term [5]. According to their model of memory, information is processed through these three components: our senses detect environmental stimuli and store them briefly in sensory memory (the sensory buffers). Attended information is then transferred to short-term memory, where it can be processed. Finally, through rehearsal, information can be stored in long-term memory and later retrieved.

Subsequently, Baddeley expanded on the Atkinson-Shiffrin model and developed a four-component working memory model: the central executive, the phonological loop, the visuospatial sketchpad, and the episodic buffer. The central executive controls information flow and attention. The phonological loop stores verbal content, while the visuospatial sketchpad handles visual and spatial data. Lastly, the episodic buffer creates integrated units of information from all three domains in working memory and links them to long-term memory. While the work of these cognitive scientists highlights the human mind and cognition as central to knowledge acquisition, other psychologists saw the environment, or human behavior, as the central element of how we shape our knowledge.

Ecological approach & behaviorism

In the 1960s, the ecological psychologists Eleanor and James Gibson took a different approach to perception and argued that information is in the environment [6]. In other words, perception is a function of the environment, and information is not a stimulus. Moreover, James Gibson argued that knowledge is constructed through the interaction between humans and the environment, in that the environment affords various actions to humans. We then use this information to build our schemata, carry out our daily activities, or navigate the world [7]. We process the perceived information (or, as Gibson called it, affordances), and then our thinking processes happen. According to Gibson, one learns by being an active observer, constantly moving one’s body or eyes relative to the environment [7].

Clark and Chalmers took this a step further and proposed that the environment plays an active role in driving cognitive processes. They suggested that the mind, or cognitive processes, extend into the world via language. They viewed language as a complement to our inner minds, a means of spreading them into the world. Without language, as they noted, we would remain “discrete Cartesian inner minds, in which high-level cognition relies on internal resources” [8].

In contrast to the ecological psychologists, who stressed the importance of the environment itself, behaviorists believed that stimuli in the environment shape our behaviors, i.e., behaviors are learned through interaction with the environment. One of the prominent psychologists in this field, Skinner, expanded on the notion of stimulus-response, in which humans respond to variables and stimuli in the environment. He argued that, when there is no specific problem, we tend to manipulate the variables around us in order to “generate new ideas” [9]. The same applies to problem-solving and to progress in discovery and invention. He viewed a solution as a response that changes the situation so the problem disappears. Unlike the ecological psychologists, who viewed information as a function of the environment, Skinner believed that behavior is a function of the environment.


We have seen philosophers progress from metaphysical conjectures about the mind to a quasi-scientific understanding of its processes, and finally establish psychology as an independent science [1]. While Socratic reasoning can help us discover contradictions in our belief systems and draw new conclusions, it is not sufficient for discovering new facts. So the scientific method has been employed since the medieval period, eventually leading to the scientific revolution of the 16th and 17th centuries.

People in medieval times believed authorities could give them all the answers, and doubting those authorities was irreligious; but Francis Bacon claimed that people should examine nature themselves and find out whether the discovered laws were true [9]. Around a century later, during the Enlightenment movement (in the 18th century), thinkers believed that we learn through reasoning and science alone [1, 10].

Since then, we have seen a progressive narrative in the history of science. Kuhn highlighted that a significant breakthrough in science is first a break with old ways of thinking and old paradigms [11]. Paradigm shifts, as he called them, happen after one recognizes anomalies and questions existing norms. Kuhn distinguished between normal science (incremental changes built on existing theories) and revolutionary science, or the paradigm shift, a transformation from one paradigm to a different one. However, both normal science (what he calls puzzle-solving) and revolutionary science are essential to understanding the world.

In conclusion, from the early fifth century, when philosophers devised their prominent theories based on speculation, to scientists who rely only on facts and evidence, we have seen significant changes in how humans construct, retain, and spread knowledge. Looking more closely at this evolution, we can see that modern science traces back to the Greek philosophers, who introduced new paradigms built on their predecessors’ work.

Lesson learned

As you saw, the Greek philosophers kept building on each other’s theories, even the absurd ones. Inaccurate conjectures still sparked new ideas. So, don’t quickly dismiss ideas that you find wrong or don’t agree with. They might help you generate new ideas!



[1] Hunt, M. (1993). The story of psychology (1st ed.). New York, NY: Doubleday.

[2] Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive Science, 4, 71-115.

[3] Norman, D. (2013). The design of everyday things. Basic Books. New York, NY 10107.

[4] Baddeley, A. (2009). Working memory. Current Biology, 20(4).

[5] Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In Spence, K.W.; Spence, J.T. The Psychology of Learning and Motivation, Vol 2. New York: Academic Press. pp. 89–195.

[6] Mace, W. M. (1977). James J. Gibson’s strategy for perceiving: Ask not what’s inside your head, but what your head’s inside of. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing (pp. 43-55). Hillsdale, NJ: LEA.

[7] Goldstein, E. B. (1981). The ecology of J. J. Gibson’s perception. Leonardo, 14(3), 191-195.

[8] Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.

[9] Skinner, B. F. (2014). Science and Human Behavior. B.F. Skinner Foundation.

[10] Willingham, D. T. (2012). When can you trust the experts? San Francisco, CA: Jossey-Bass.

[11] Kuhn, T. S. (2012). The structure of scientific revolutions (4th ed.). Chicago, IL: The University of Chicago Press.


Diffusion: L&D Innovations

What makes a change slow or fast in some organizations?

By change, I’m referring to adopting a new strategy, working model, software, or set of best practices in learning and development. My focus here is on the role of L&D in these changes.

Recently, I read Everett Rogers’ book, Diffusion of Innovations, and it occurred to me that diffusing innovations rests on principles that apply not only to marketers but also to learning and development professionals. After all, learning professionals have to, at some point in their careers, convince their stakeholders to do things differently to achieve better results.

The rule of thumb, repeatedly stressed in the book, is to learn about the culture and customs of a place before promoting a new idea there. No matter how brilliant an idea seems to us, it might seem completely pointless to others somewhere else. L&D professionals who play the role of change agents in companies should consider this as well.


When can you trust the experts?

We have all, in one way or another, been persuaded by advertisers to buy something, induced by salespeople to buy their revolutionary software, or convinced by researchers that a model or tool is effective, often with little or no evidence. We all know that advertisements include meticulously chosen subliminal effects to influence our decisions. In other words, you are made to believe what ‘experts’ want you to believe, and you are surrounded by these peripheral persuasions at home, at work, and on the streets. The question is when you can trust them.

In his book When Can You Trust the Experts?, Willingham contends that we should take a more scientific stance toward the things that affect us, and he shows how to distinguish good science from bad science. After all, with the rise of social media, we are exposed to a large amount of information that can influence how we think and work.

In the first chapter of the book, “Why Smart People Believe Dumb Things,” Willingham lays out how individuals behave or think in certain ways because of the unconscious persuasion of ads, propaganda, and even social interactions with others. He highlights that we tend to believe things that others believe, and this social proof makes persuasive messages more credible.

Moreover, many of these persuaders use the term “research-based” to persuade us. But does this “empirical” research warrant the credibility of their claims? Are all papers published in peer-reviewed journals reliable? It is well known that many studies, particularly in health care and education, make sweeping claims influenced by conflicts of interest. Unfortunately, many of these unwarranted studies become widely accepted: for example, “learning styles” or “twenty-first-century skills” in education and schooling. Willingham remarks:

Education researchers have never united as a field to agree on methods or practices that have sound scientific backing.

So he invites us to tell good science from bad. He highlights that to protect ourselves from believing false claims, we need to be aware of the peripheral cues in persuasive messages, so we can discount them, and to be aware of our own beliefs, which might bias how we evaluate new information. He also recommends four steps to identify false claims and make the right decision, so that you become more analytical and critical of what persuaders around you offer. These are:

  1. Strip it and flip it: To strip a claim, restate it in the form “If I do X, then there is a Y percent chance that Z will happen,” where X is the persuasive message, Y is the value, and Z is the desired outcome. For example, “if we use online training instead of traditional training, there is a 50 percent chance that our employees’ productivity will increase.” Then flip the outcome (or what you are asked to do). In the above example, the flipped message becomes “if we use online training instead of traditional training, there is a 50 percent chance that our employees’ productivity will decrease.” Sometimes you might have to flip both the persuasive message and the outcome to analyze the claim properly.
  2. Trace it: To check whether a claim is scientifically supported, don’t rely on credentials alone. While someone’s status as a professor may indicate that their work has scholarly integrity, it doesn’t necessarily mean the person applied scientific methods in evaluating the recommended change. Moreover, ‘experts’, or, in this context, persuaders, sometimes misunderstand other researchers’ claims.
  3. Analyze it: Do not generalize a change based on your experience. Willingham’s main message of his book is “You can’t trust your own experience. You need scientific proof!”
  4. Should I do it: Using the previous three steps, ask yourself whether sufficient evidence supports the change the persuader claims. Then consider all the factors involved in the change, and make your decision.
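The “strip it and flip it” step is mechanical enough to sketch in a few lines of code. This is purely a hypothetical illustration of Willingham’s sentence template, not anything from the book:

```python
def strip_claim(action: str, chance: int, outcome: str) -> str:
    """Restate a persuasive claim in the template
    'If I do X, then there is a Y percent chance that Z will happen.'"""
    return f"If I {action}, then there is a {chance} percent chance that {outcome}."

# Strip the online-training example from step 1...
stripped = strip_claim("use online training instead of traditional training",
                       50, "our employees' productivity will increase")

# ...then flip the outcome to its opposite and compare how plausible
# each version sounds.
flipped = strip_claim("use online training instead of traditional training",
                      50, "our employees' productivity will decrease")

print(stripped)
print(flipped)
```

If the flipped version sounds just as plausible as the original, the claim probably isn’t telling you much.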

I don’t personally find it feasible to apply all these steps every time, but if you are making a decision that involves your health or a substantial amount of your company’s money, you might want to consider all of them. When Can You Trust the Experts? makes you somewhat more analytical about what you read and hear concerning your health, work, and education.

Reference: Willingham, D. T. (2012). When can you trust the experts? San Francisco, CA: Jossey-Bass.

Situational Leadership & Workplace Learning

I have shared in multiple posts that leadership in organizations plays an important role in supporting employees’ performance, including their learning. I recently attended a workshop on situational leadership, created by Ken Blanchard. I’m sharing my key takeaway from the workshop as well as my reflections on how situational leadership can relate to workplace learning.