Principles of engagement in next generation instructional software design
initiated by Roy Clariana (February 2008) comment as you like

This report was funded by CompassLearning contract #20874

Educational software developers (and users) live in interesting times. Several streams have come together to drive significant change: improved and less expensive computers, including tablets and cellphones; increased internet bandwidth with nearly ubiquitous access (e.g., 4G); new ways of interfacing with computers (e.g., multi-touch, accelerometers, biosensors); and, most critically, learners who are probably different from their parents in both their cognitive skill set and their expectations. This paper begins with one description of today's learners. It then provides an overview of the salient features and principles reported in the scientific literature (e.g., instructional TV and gaming research) regarding the influence of student engagement on learning and retention, and of brain research on educational practice, as these relate to the design of instructional software, connecting these principles to the existing CompassLearning Odyssey product line and to JellyVision development projects. Finally, a futures section anticipates product development perhaps a few years from now.

Students Have Changed

Several popular writers have suggested that the current generation of students is very different from previous generations. From birth to age 22, today's students in the US on average spend 10,000 hours playing videogames, send 200,000 emails and instant messages, talk 10,000 hours on digital cell phones, watch 20,000 hours of television (especially fast-paced shows watched by teens, such as MTV), watch 500,000 commercials, but spend only 5,000 hours reading books. Probably the most important principle from brain research is the dynamic nature of the brain. Bruce Berry of the Baylor College of Medicine says, "Different kinds of experiences lead to different brain structures" (Prensky, 2001, p. 1). As discussed later in this paper, reading begets reading; but the numbers indicate that the typical student doesn't read much.

Don Tapscott (1998) believes that the current net generation (net-geners) have different expectations of technology-based content and material, different ways to communicate and collaborate, different attitudes toward technology, and different ways of participating in the networked community, all of which profoundly affect their expectations regarding the process of teaching and learning. Similarly, Prensky (2005) refers to today's students as "digital natives" who communicate (instant messaging), share (blogs), shop (eBay), exchange (peer-to-peer technology), meet (3D worlds), collect (downloads), coordinate (wikis), evaluate (reputation systems), search (Google), analyze (SETI), report (camera phones), broadcast (podcasts), program (modeling), socialize (chat rooms), and even intentionally learn (Web research). This mass of experiences affects metacognitive learning style, cognitive skills and processes, and expectations, all of which influence personal goal setting, self-efficacy, and motivation.

Working with electronic media can also alter metacognition. Consider learning style, "the characteristic behaviors of learners that serve as relatively stable indicators of how they perceive, interact with, and respond to the learning environment" (Keefe, 1979). As an example, Kolb's (1976) Learning Style model describes two bipolar dimensions, abstract conceptualization versus concrete experience, and reflective observation versus active experimentation. In a study involving middle school, high school and adult learners (Clariana, 1997), intensive experience with instructional software resulted in a pre-to-post learning style shift towards active experimentation and concrete experience, with an increase in instructional risk-taking and a tendency to push on or forge ahead. The shift was greater for higher-ability students, and the more time spent with the instructional software, the greater the shift.

Cognitive skills are also strongly altered by extensive screen-based experiences. In terms of fine motor coordination, it is a big deal when students are able to transition from the giant pencil to a regular size pencil, or are able to skip on one foot. But learning to read requires monumentally greater fine-motor coordination (for near-point visual acuity) and top-down subconscious mental control (it seems amazing that anyone can learn to read at all).

Now consider this: the way people read web screens is very different from the way they read print-based text (for many reasons), as clearly demonstrated by eye-tracking research comparing print versus web-based reading. When reading print-based text, readers fixate for a moment (about 250 ms) on each succeeding significant word. The technical term for the movement is saccade, a precise and very rapid jump from word to word with concomitant focusing on an area about the size of an average word. A fixation window is 7-9 character spaces long in young readers and up to 20 spaces long in highly skilled readers; the window size is invariant to the actual target word size. Beginning readers fixate and sound out every word, while competent readers do not fixate every word. Research has shown that on average, content words are fixated 85% of the time and function words 35% of the time (Carpenter & Just, 1983), with 2-3 letter words skipped 75% of the time but 8-letter words fixated almost always (Rayner & McConkie, 1976). Readers typically move their eyes forward when reading, but 10-15% of saccades move backward, refixating earlier words (look-backs probably relate to text difficulty or coherence). In competent readers, saccades and fixations are fast (about 170 per minute), and fixations account for about 95% of reading time (Rayner, 1978).
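The fixation rates above imply a rough expected-fixation count for any stretch of text. A minimal sketch in Python, assuming the 85%/35% content/function rates reported by Carpenter and Just; the sample sentence and the toy function-word list are our own illustration (real studies classify words by part of speech):

```python
# Estimate the expected number of fixations for a sentence, using the
# reported rates: content words fixated ~85% of the time, function words ~35%.
CONTENT_RATE = 0.85
FUNCTION_RATE = 0.35

# A toy function-word list for illustration only.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "on", "and", "is", "was"}

def expected_fixations(sentence: str) -> float:
    """Expected number of fixations, summing each word's fixation probability."""
    words = sentence.lower().split()
    return sum(FUNCTION_RATE if w in FUNCTION_WORDS else CONTENT_RATE
               for w in words)

print(round(expected_fixations("the boy read the book in the park"), 2))  # prints 4.8
```

So a competent reader would be expected to fixate only about 5 of this sentence's 8 words, which is consistent with the skipping rates cited above.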

However, when reading web screens, people view different areas of the screen and pay attention to very different kinds of prompts and signals relative to reading print (Nielsen, 2000). They often begin with a left-to-right strategy and then consistently shift to a center, then left, then right strategy (Schroeder, 1998). Not only the spatial trajectory but also the size of the saccade changes, becoming larger. The terms 'foraging' or 'browsing' describe this behavior better than the term 'reading'. Considering that most children browse electronic media about 5 times more than they read print, some writers suggest that the web has altered reading, for better or worse and perhaps forever (Lamb, 2005). [web eye-tracking video]

At the same time, there is now a renaissance of good-quality books for children and adolescents. But though plenty of print is available, many children don't read books for fun. So in our classrooms there are pockets of "readers" and big bunches of "browsers", mixed in with some children who don't (or can't) read at all (for various reasons), and the gap between those who read books and other print for pleasure and those who don't is increasing every year.

Massive experience with electronic media also affects expectations, especially regarding effort versus entertainment and how good the production quality must be. Salomon (1984) showed that US students invest more mental effort when reading and little effort when watching educational television in school, while Israeli students invested as much effort in both conditions. He suggests that US students have been conditioned to treat TV as entertainment and bring that attitude to the classroom. Similarly, today's students may see all electronic media as entertainment and bring that expectation to the classroom. For most, electronic media equals communication (conversation), so they are likely to bring that expectation and style to instructional software lessons.

In addition, students have become accustomed to high production values. Both TV and, increasingly, the web provide high-quality images and sound. These experiences have set students' expectations high; software with lower production values will be less well accepted. Fortunately for my HD home videos, the flood of amateur video online has lowered production-value expectations worldwide, at least for a few years, but users are less forgiving of commercial software.

And by the way, these changes in processing and expectations have a push-back effect on print-based text and even television. Compare a textbook from 1950 to one from today. Textbooks for generations were linear and word based, with few images and very dense text. Today's textbooks from pre-Kindergarten through Calculus are a collage (first mass exposure in Sesame Street and later in MTV; Hankins, 2005), with a thin column of text, lots of headings and subheadings, multiple pictures and charts on every page, plus dialog boxes with enrichment, hooks, and backstory (e.g., that Madame Curie died of radiation poisoning or that Miles Keogh's horse Comanche was the only survivor of Custer's Last Stand). Television has experienced this same collage phenomenon. TV news now has a main area to the right with the narrator in front of an active background graphic, a static text area on the left, and another on the bottom with the logo, current time and temperature, and the Dow, NASDAQ, and S&P ticker; below that is a scrolling banner with a totally different set of news headlines. If you prefer this look, your mental processes and expectations have shifted towards multi-tasking. If you have a teenager, you have most likely seen them do their homework (on paper) while talking on the cell phone, watching TV, and instant messaging three different friends at the same time. McLuhan (1962) commented that "First we build the tools, then they build us!"

So is it any surprise that today's students don't read much, or at least not in the traditional sense, using the kinds of reading materials that we adults read as children? And reading begets reading. This presents an extraordinary problem for instructional software designers and teachers related to disengagement. An individual student's decision to persist in an instructional lesson is related to his goal (intrinsic and extrinsic) and his belief that he can accomplish the goal (referred to as self-efficacy). Students who don't like to read a one-channel linear format or are not good readers (or both) have low self-efficacy for tasks involving reading and quickly disengage from an instructional software lesson (or any task) that involves this kind of reading, unless the task has a high payoff or is compelling enough to keep the student working. Thus instructional software screen design may need to shift to collage to accommodate net-geners.

Netgen Software Design

Most instructional software today involves traditional linear one-channel reading because all of the research supports the "keep it simple" principle. Most instructional designers would be uncomfortable adding more than one or two graphics to a screen and would be aghast at adding extraneous graphics or information such as a backstory. Screen real estate is precious, and extraneous material detracts from the lesson objectives.

Comprehensive instructional software, such as CompassLearning's software, has an additional problem due to the nature and scope of the content that must be addressed. To accomplish comprehensive curricular coverage, designers create several generic templates that incorporate research-based strategies for different learning outcomes, and then add different content over and over to the templates to make separate lessons. For example, consider a software lesson on consonant blends (or digraphs) that incorporates two or three proven strategies for this type of learning, all in the same lesson. The lesson is highly interactive and motivational with exceptionally good production quality. A student uses it, learns the skill, and enjoys it. Great! But there are a lot of sounds and words in the English language, so now repeat 50 times to "cover" the most common blends, incorporated into 4 or 5 templates. No matter how good the strategy or how motivating the lesson, somewhere around the 30th lesson even Job might say this is enough.
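The template-and-content approach described above can be sketched in code. Everything here is hypothetical (the template names and blend list are our own, not CompassLearning's actual data); the point is only that lesson count grows multiplicatively while design effort stays near the template count:

```python
# A hypothetical sketch of template-based lesson generation: a few generic
# templates are reused across many content items, so the number of lessons
# grows as templates x content while design cost stays near the template count.
from itertools import product

TEMPLATES = ["match_sound_to_word", "fill_in_the_blank", "sort_by_blend"]
BLENDS = ["bl", "br", "ch", "cl", "cr", "dr", "fl", "fr", "gl", "gr",
          "pl", "pr", "sh", "sk", "sl", "sm", "sn", "sp", "st", "tr"]

def generate_lessons(templates, contents):
    """Pair every template with every content item to 'cover' the curriculum."""
    return [{"template": t, "content": c} for t, c in product(templates, contents)]

lessons = generate_lessons(TEMPLATES, BLENDS)
print(len(lessons))  # 3 templates x 20 blends = 60 near-identical lessons
```

Sixty structurally identical lessons from three designs is exactly the economy, and exactly the monotony, described above.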

An alternative approach would be to custom design every lesson, for example, "this lesson is brought to you by the letter C", à la Sesame Street. Such software design would be similar to creating TV episodes. But it is prohibitively expensive to customize every lesson based on its specific content, and there are only a limited number of proven strategies for the various learning outcomes. So how do you influence students to persist? Designers must build in engaging elements and motivational components, with plenty of variety.

"Gameshow" is a program concept that can be used over and over to teach different content while providing components of engagement, including a story line and recurring and new characters involved in conversations (cf. JellyVision's You Don't Know Jack). Cameron (2004) compared the effects of adding a game component to instruction. Undergraduate students (n=422) completed a computer-based lesson on the circulatory system. Compared to the control group, those who received the game spent more time in the lesson, scored significantly higher on the four posttests, with an average total effect size of 0.75 (a large effect), and had significantly higher measures of perceived motivation.

Characters in the form of sock puppets or purely digital actors can replace videos of people, reducing development cost by a factor of 10 or more relative to using video of actors. [Report the results of the recent ETR&D paper here.]

Recently, Dickey (2005) looked at the work of Jones, Valdez, Nowakowski, and Rasmussen (1994), Schlechty (1997), and others and identified several important elements of engaged learning: focused goals; challenging, complex, sustained tasks; clear and compelling standards of success; protection from adverse consequences for initial failures; affirmation of performance; affiliation with others; novelty and variety; choice; and authenticity (Dickey, 2005, p. 70). Dickey (2005) holds that software game designers are pioneers in interactive design, and her review of the software gaming literature adds the following critical elements of engagement: the learner's point-of-view (POV) should be inside the lesson, involving the learner as a participant; narrative (a story) should drive the lesson; and interactive design must include setting, roles, and onscreen characters. These elements are similar to those described by Beck and Wade (2004).

For example, in our field observations of students using CompassLearning Odyssey software, we have seen that these lessons do a good job in terms of focused goals, clear and compelling standards of success, protection from adverse consequences for initial failures, and affirmation of performance. Individual lessons also initially have novelty and variety, but in template-based design the novelty inevitably wears off, although mixing different kinds of related lessons in a sequence helps maintain some variety. However, authenticity and challenging, complex, sustained tasks are seldom built into the lessons. Choice, or at least the perception of choice, such as true within-lesson branching depending on the learner's preference, is only occasionally built in. I imagine that existing lessons that use branching are a big hit with students. Finally, affiliation with others is typically not built into Odyssey lessons, nor are participant POV, narrative, or setting, roles, and characters. [Run this by Kelly to see if there are lessons with these elements.]

The Role of the Teacher

It is unlikely that instructional software will ever be "teacher proof", and it should not be. Our experience with comprehensive instructional software from companies such as CompassLearning is that designers have traditionally addressed many of the elements of engagement discussed earlier, but they also depend on teachers to foster engagement with the software lessons. All of these elements of engagement can be fully manifested if the software is properly implemented. For example, printing out and reviewing performance reports with each student is an affirmation of performance. Clariana (1993) has shown that reviewing progress reports improves both attendance (motivation) and achievement for at-risk students attending an after-school remedial program. In fact, this may be the most powerful influence on instructional software engagement: that someone you value (the teacher, a parent, peers) cares that you are doing it. Not reviewing the reports of student lesson progress is the same as saying "this isn't important." How students engage software lessons is strongly influenced by the way teachers set up and use the lessons as a component of their total instruction.

The next generation of instructional software must continue to include proven instructional strategies but must also incorporate more of these elements of engagement, as well as the look and feel of mainstream electronic media (e.g., a collage format). Lessons currently being designed and developed by JellyVision for CompassLearning focus especially on POV, narrative, and the interaction of setting, roles, and characters. An approach called the interactive conversation interface (iCi) maintains pacing and creates and sustains an illusion of awareness.


Maintaining a quick pace increases learner interaction. For example, even in early so-called 'twitch' games such as Pacman, the quick pace keeps the player's attention. For the learner, time seems to "flow" by, and this flow phenomenon presents a paradox for modern theories in that learners paid attention to the game even though the game did not use most of the characteristics that Dickey (2005) would consider most powerful, including challenging, complex, sustained tasks; affiliation with others; novelty and variety; choice; and authenticity.

Pace influences not only attention but also cognition, especially processes related to reading comprehension. de Lopez (1993) notes that increasing reading speed (1) helps students break the habit of translating word for word, (2) increases the reader's confidence by demonstrating that he can comprehend a great deal from a text without understanding every word, (3) encourages students to change reading strategies, to utilize previous knowledge more efficiently, and to depend less on the printed text, (4) helps increase concentration, since the reader's mind will be more actively processing and integrating the information, and (5) promotes reading for ideas and concepts rather than deciphering letters and words.


The iCi (click: iCi) approach creates and maintains an illusion of awareness that keeps learners on task. Children like to anthropomorphize and are willing to suspend disbelief to foster a conversation. For example, one of our student teachers created a puppet named 'Betsy' for her kindergarten students (Mary, 2005). In less than a week, half of the parents were talking about Betsy, and many of the parents were not aware that Betsy was a puppet rather than a person. Probably the conversation was something like, "Well, what did you do in kindergarten today?" "I played with Betsy. We practiced counting to 10 and then sang a song."

[For example, describe two ici lesson strategies here…]


When you add characters and settings to instructional software, you set up the possibility of synergy. Imagine for a moment that you are a third-grade teacher and your district has just provided the best-designed and most popular instructional software for you to implement on the four computers in the back of your room. This software has characters that the children see on TV; in fact, the children have lunch boxes and tennis shoes with the names and images of these characters. When the students log in, they read and hear stories from the characters' points of view, go on virtual adventures, and have conversations and discussions with the characters in real time; they talk about the activity on the playground, even joining online fan clubs, reading blogs about the characters at home, and watching their TV cartoon on Saturday morning before soccer. In other words, software companies will not only adopt the episode design approach and principles of TV; the audience and marketing methods will follow as well.

Brain Research and Learning

The years 1990-1999 were declared by the US Congress the Decade of the Brain. Substantial grant funds were directed towards understanding how the brain works, especially through neural imaging. The results of these past and ongoing studies provide research-based evidence of how the brain works, but it often takes time for this type of elemental knowledge to be established through theory development, peer review, and replication. Further, there is no direct line between brain-research facts and educational application. Functional brain imaging may show where neural activity occurs for intellectual tasks such as reading, but not what or why. Berninger and Richards (2002) say, "We also emphasize that the fields of neuroscience and educational neuropsychology are not yet to the place where we can go directly from brain scan to lesson plan" (p. 16). They go on to say that much of the current "brain-based" educational application is not based on the most recent research and, in addition, that the effectiveness of most so-called "brain-based" classroom approaches has not been validated. So significant and substantial cross-discipline work remains to translate the fine-grained, complex, elemental brain imaging research on structural and functional brain areas and their interactions into educational applications.

For example, what does brain research say about learning to read? First, reading is not a "natural" human ability; there is no basic built-in reading system in the brain. The brain creates a functional reading system by appropriating processing capacity from existing systems that serve other purposes. These subsystems must be reprogrammed from their original functions through extensive association, and the new subsystems must be interconnected. For example, one initial subsystem is an orthographic processor that can isolate words from the volumes of incoming visual information and preprocess them into sub-lexical (letters, phonemes) and lexical (words) components to pass on for further processing. Neural imaging indicates that the orthographic processor has three constituent areas in the brain that progressively handle different aspects of orthographic processing. [Possibly describe the three areas in greater detail.] As a caveat, it is critical to remember that most imaging is done with adults, not children. Research has shown that there is a shift in reading subsystem pathways with maturation, characterized by automatic word recognition and the ability to read silently.

How can this information about the functional brain structures comprising the developing orthographic processor inform reading instruction? Just as an athlete may work in the gym to strengthen a specific muscle group that is specific to her sport, or an injured athlete undergoes physical therapy directed at their injury, instructional activity can focus on development of these subsystems both for typical beginning readers as well as for any readers with various processing deficits. However, because of the need to interconnect subsystems, it is critical that subsystem development be done as part of or in conjunction with activity that interconnects subsystems. Thus brain research (Berninger & Richards, 2002) supports combining phonics (programming the orthographic processor) and whole language (interconnecting the developing subsystems and also for maintaining attention & motivation). This approach is consistent with the findings of the National Reading Panel (2000) that showed that beginning reading instruction should include linguistic awareness, word recognition with phonological decoding, reading fluency, and comprehension.

As one clear example, the initial processing stage of the orthographic processor responds to alphabetic letters (e.g., A, B, C) the same as it does to frequent phonemic blends (e.g., sh, ch). Considering this along with research-proven instructional practice, phonics instruction can be used to develop this subsystem of the orthographic processor, with the caveat that the corpus of sub-lexical and lexical components used in these lessons must include the most frequent age-appropriate spelling-phoneme correspondences (p. 154).

In more general terms, Caine and Caine (1994) list 12 principles of brain-based learning: (1) The brain is a complex adaptive system that functions on many levels simultaneously. (2) The brain is a social brain. (3) The search for meaning is innate; we need to make sense of our experiences. (4) The search for meaning occurs through "patterning" (familiarity). (5) Emotions are critical to patterning; an appropriate emotional climate is indispensable to learning. (6) Every brain simultaneously perceives and creates parts and wholes. (7) Learning involves both focused attention and peripheral perception. (8) Learning always involves conscious and unconscious processes; much of our learning is unconscious in that experience and sensory input are processed below the level of awareness. (9) We have at least two ways of organizing memory (semantic and episodic). (10) Learning is developmental. The brain is "plastic", with no limit to growth and to the capacities of humans to learn more; neurons continue to be capable of making new connections throughout life. (11) Complex learning is enhanced by challenge and inhibited by threat. (12) Every brain is uniquely organized, and so choice is good (multiple intelligences, male and female, learning styles, differing talents and intelligences, and so on).

[Relate each of the above to CompassLearning lessons and the new iCi lessons.]


Becker (2000) said, "It is not too much of an exaggeration to say that American adolescents live in a world defined by their age-peers and that they visit the alien adult world during their time in teachers' classrooms" (p. 13). Eventually, next-gen students will be the teachers and administrators who make hardware and software purchase decisions. As surely as you can't sell black-and-white software in a color-monitor world, software must almost certainly move towards collage and communication. But at this moment, we cannot throw the baby out with the bathwater. The transition phase should include both traditional and new-generation lessons. Mixing the two within a lesson sequence allows for breadth of content and the variety that tends to maintain engagement.

"I'm bored." You've heard it a thousand times (today) if you are a parent (or teacher). But now, you've had it. You pull the car over, get out, and open the back door. Then you say, "OK, you drive." It's the same 'boring' scenery out the window, the same 'boring' route to grandma's house, the same 'boring' people inside the car, but now it's different. Bart went from bored in the back seat to engaged in the front seat in just 5.2 seconds (this is cool, this is fun, this is real). If you say, "Slow down ahead, this curve is a bit tricky," he listens, learns, and responds as though you were a rock star. If you say, "Turn down the radio a bit, it's just safe driving," a millisecond later the radio is quiet, without any eye-rolling or muttered comments of "You're so lame." So, what makes this particular task engaging, and more to the point, what makes instructional software engaging? Recall this scenario when you need to describe what makes software engaging. The software must provide a real context and interaction where the learner is in control of something that matters to her.

We bought two Furbies this Christmas for the grandkids at $15 apiece (Furbies were "THE TOY" several years ago). A Furby is a sort of teddy bear that talks, but not like pulling a string on Barbie. Rather, it asks questions, such as "Do you like me?", and responds differently to yes or no replies. And you must keep the book, because Furbies also have their own language, and you can communicate with them more easily using the book. Furbies also talk to each other. It's not hard to imagine children at school and home in 5 or 10 years sitting in the corner learning consonant-vowel blends from a next-generation Furby that is wirelessly connected to the classroom router. In fact, if you had a bunch of Furbies and the right software, they could do a puppet show with each other and with the children. Future "Furbies" might play with each other all night once the children leave.

More and more DVD movies allow users to choose alternate paths at several critical points in the story narrative, for example, the animated movies The Abominable Snowman, The Lost Jewels, and Mystery of the Maya. Given the decreasing cost of heads-up and other display technology, you could watch the movie Matrix 7 in 2012 in your living room with your significant other, each of you watching a different plot progression and ending for the same movie, perhaps based on immediate biomeasures and your multi-dimensional past preference pattern. In the classroom, your students will walk around with Bluetooth-like earphones with built-in heads-up video displays, interacting with the real context through a virtual overlay. Everything in your classroom will be radio-tagged and info-linked; everything will have multiple layers of meaning. The collage will move from the current 2-dimensional form to 3-dimensional with other dimensional tags; other students, both present and remote, will appear as avatars in the visual/real world. The device will also record video and audio that can be stored, indexed, and replayed forever. Given enough audio/video, AI software will create an individual semantic space that reflects the child's knowledge base, probably able to respond as the child would respond, perhaps as a true Doppelgänger.

Software lessons with artificial intelligence will also be an important part of future instruction. [Describe data-mining software designed to support writing and automatically score essays, for example, the VantageLearning software.]

Eye-tracking research indicates that gaze plays an important role in conversation. People use the timing of mutual gaze to coordinate dialogue (Bavelas & Chovil, 2000; Bavelas, Coates, & Johnson, 2002). The other person's glance signals conversational turn taking. Thus eye movements are part of the composite signal used in the collaborative act of conversation (Clark, 1996). Onscreen characters should look away and then fixate back on the learner to cue the learner that it is his or her turn to respond.
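This gaze cueing can be sketched as a simple state-to-behavior mapping for an onscreen character. The state and behavior names here are our own illustration, not from any cited system:

```python
# A minimal sketch of gaze-based turn-taking cues for an onscreen character:
# the character averts its gaze while speaking (holding the conversational
# floor), then fixates back on the learner to signal "your turn".
def character_gaze(dialogue_state: str) -> str:
    """Map the character's dialogue state to a gaze behavior."""
    transitions = {
        "speaking": "look_away",                # averting gaze holds the floor
        "finished_speaking": "fixate_learner",  # mutual gaze yields the turn
        "listening": "fixate_learner",          # sustained gaze shows attention
    }
    return transitions.get(dialogue_state, "idle")

print(character_gaze("finished_speaking"))  # cue the learner to respond
```

A real character animation system would layer timing and blinks on top, but even this bare mapping reproduces the turn-yielding glance described above.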

In the further future, computer hardware and software could monitor students' speech and track eye movements while the student is verbally responding to the computer, using gaze as a cue that the student is done talking. Also, Stampe and Reingold (1995) have shown that it is easy to train naïve users to control a computer with their gaze, for example, for use by quadriplegics. Gaze is faster than a mouse, but human-computer interface researchers have not come up with an intuitive gaze-based solution for 'clicking'. Blinks are unnatural for users; dwell time is a more natural solution. If the dwell time is set too short, everything users look at is selected. However, if dwell time is individually calibrated to be just longer than a normal fixation, it works, and users report an eerie sense that the system knows what they want it to do before they consciously act (Jacob, 1991). As gaze-tracking hardware and software evolve and come down in price, new ways of interacting with instruction will become available.
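A dwell-time 'click' can be sketched as follows. The 250 ms baseline echoes the normal-fixation duration cited in the reading research earlier; the calibration margin and sampling interval are assumptions for illustration, not values from Jacob (1991):

```python
# Dwell-time gaze selection sketch: a 'click' fires only when gaze stays on
# one target longer than a calibrated threshold just above a normal fixation.
NORMAL_FIXATION_MS = 250     # typical fixation duration (assumed baseline)
CALIBRATION_MARGIN_MS = 150  # hypothetical per-user calibration margin

def dwell_click(samples, threshold_ms=NORMAL_FIXATION_MS + CALIBRATION_MARGIN_MS,
                sample_interval_ms=50):
    """Return the first target whose uninterrupted dwell exceeds the threshold.

    `samples` is a time-ordered list of gaze targets, one per sample tick
    (None means gaze is on no target).
    """
    current, dwell = None, 0
    for target in samples:
        if target == current:
            dwell += sample_interval_ms
        else:
            current, dwell = target, sample_interval_ms
        if current is not None and dwell > threshold_ms:
            return current
    return None  # no selection: every glance was shorter than the threshold

# Glancing across targets selects nothing...
assert dwell_click(["A", "B", "A", "B", "C", None]) is None
# ...but lingering on one target selects it.
assert dwell_click(["A"] * 3 + ["B"] * 10) == "B"
```

Setting the threshold just above a normal fixation is exactly the calibration trick described above: ordinary looking never triggers a click, while a deliberate linger does.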

Biosensors, such as electrodermal response (Clariana, 1992) and low-cost neural sensors ( ), provide new ways to interact with computers. As individuals develop new kinds of physiological/psychological response patterns through extensive experience with these sensors, it may amount to a new way of experiencing the world.

The expansion of internet bandwidth and sophisticated communications devices will bring telepresence to education, and this has many potential ripple effects. For example, parents and grandparents can “tune in” to show and tell from home or work, fundamentally changing the internal dynamics of the classroom as well as making classroom activities into a mediated experience and the students into actors with roles. The list can go on and on: parent conferences, students sick at home, telepresence of home-schooled children (who could be virtual members of several different first grade classes, rather than one or none), regularly scheduled remote guest lectures (broadcast to multiple classrooms), teacher as celebrity, and so on. A company such as Weekly Reader could be part of telepresence in the classroom, complementing their print magazine, much the way that textbook publishers provide supplemental websites.

Regarding collage and the current look and feel of TV news, software lessons could incorporate a live running ticker along the bottom with timely information relevant to students. Students (and teachers) might log on to the software just to catch up on the live feed. Also, because the software is online, real time massive user interaction in games or lessons is a potential audience draw. Schools could field “players” who compete at set times with players from other schools at the same grade levels, à la spelling bees or College Bowl. Science and social studies lessons could be designed where students in one school work with students in other schools to collect and report data and stories. Somewhat like Weekly Reader, one-shot lesson episodes could be written that tie in current topics, for example a Mars landing, and such lessons could be distributed as a miniseries. Such a series could employ the characters from the regular lessons as on-the-spot field reporters, thus building interest and the tie-in to the main curriculum.
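A minimal sketch of the running-ticker idea is below. It is illustrative only: a real implementation would fetch items from a live server feed, while this sketch simply cycles through a hypothetical item list.

```python
# Illustrative sketch of a rotating news ticker for a lesson screen.
# The item list is hypothetical; a real feed would come from a server.

from itertools import cycle

def make_ticker(items):
    """Return a function that yields the next ticker item on each call,
    cycling forever through the item list."""
    it = cycle(items)
    return lambda: next(it)

next_item = make_ticker([
    "Mars rover sends new photos",
    "State spelling bee finals Friday",
    "Grade 5 science fair results posted",
])
```

The lesson’s display loop would call `next_item()` on a timer to scroll the headlines along the bottom of the screen.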
In terms of screen design, the computer screen itself may be too small a unit of analysis:
Rather the point is that, in thinking of what “Mathland” might mean, the room, and not the computer screen, is the most tasteful and productive grain size of design for educational technology. That is: as educational technologists, we should try to imagine what the child’s room (or maybe the classroom) might look like, not merely what sort of interface is provided on a computer screen. (

Finally, a common thread runs through all of these possibilities: collaboration. Next generation instructional software will engage today’s learners less as solitary users completing a program and more as members of communities of practice, connected to classmates, teachers, parents, and students in other schools. The hardware, bandwidth, and interface trends described above, from telepresence to massive multi-user lessons to gaze- and sensor-based interaction, matter precisely insofar as they support that kind of participation. The design challenge for developers such as CompassLearning is to build products that students experience not as programs to complete but as places to participate.


Bavelas, J. B., & Chovil, N. (2000). Visible acts of meaning: An integrated message model of language in face-to-face dialogue. Journal of Language and Social Psychology, 19 (2), 163-194.
Bavelas, J. B., Coates, L., & Johnson, T. (2002). Listener responses as a collaborative process: The role of gaze. Journal of Communication, 52(3), 566-580.
Beck, J. & Wade, M. (2004). Got game: How the gamer generation is reshaping business forever. Boston, MA: Harvard Business School Press.
Berninger, V.W., & Richards, T. L. (2002). Brain literacy for educators and psychologists. New York, NY: Academic Press.
Caine, R. & Caine, G. (1994) Making Connections: Teaching and the Human Brain. Boston, MA: Addison-Wesley.
Cameron, B. H. (2004). The effect of gaming, cognitive style and feedback type in facilitating delayed achievement of different learning objectives. Unpublished doctoral dissertation, Pennsylvania State University.
Carpenter, P. A., & Just, M. A. (1983). What your eyes do while your mind is reading. In K. Rayner (Ed.), Eye movements in reading: Perceptual and language processes (pp. 275-307). New York: Academic Press.

Carrier, C., & Sales, G. (1987). The effect of learning style and type of feedback in computer-based instruction. A paper presented at the annual meeting of the American Educational Research Association, April 1987, Washington, DC.
Clariana, R. B. (1997). Colloquium: Considering learning style in computer-assisted learning. British Journal of Educational Technology, 28, 66-68.
Clariana, R. B. (1993). The motivational effect of advisement on attendance and achievement in computer-based instruction. Journal of Computer-Based Instruction, 20 (2), 47-51.
Clariana, R. B. (1992). Media Research with a Galvanic Skin Response Biosensor. Conference presentation. see:

Clark, H. H. (1996). Using language. Cambridge, UK: Cambridge University Press.

de Lopez, C.C. (1993). Developing reading speed. Forum, 31 (2), 50-55. Downloaded January 7, 2006 from
Dickey, M.D. (2005). Engaging by design: How engagement strategies in popular computer and video games can inform instructional design. Educational Technology Research and Development, 53 (2), 67-83.
Tapscott, D. (1998). Growing up digital: The rise of the net generation. New York: McGraw-Hill.

Hankins, S. R. (2005). Personal communication.

Jacob, R. J. K. (1991). The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look At is What You Get. ACM Transactions on Information Systems, 9(3), 152-169.

Jones, B., Valdez, G., Nowakowski, J., & Rasmussen, C. (1994). Designing learning and technology for educational reform. North Central Regional Educational Laboratory. [Online]. Downloaded January 12, 2006 from
Keefe, J. W. (1979). Learning style: An overview. In Student learning styles: Diagnosing and prescribing programs. Reston, VA: National Association of Secondary School Principals.

Kolb, D. A. (1976). Learning Style Inventory: Technical manual. Boston, MA: McBer.

Lamb, G. M. (2005). How the Web changes your reading habits. The Christian Science Monitor, June 23, 2005. Downloaded January 12, 2006 from

McLuhan, H. M. (1962). The Gutenberg galaxy: the making of typographic man. Toronto, Canada: University of Toronto Press.
National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its application for reading instruction. Washington, DC: National Institute of Child Health and Human Development publication.

Nielsen, J. (2000). Eyetracking study of web readers. Downloaded January 11, 2006 from

Prensky, M. (2001). Digital natives, digital immigrants, Part II: Do they really think differently? On the Horizon, 9 (6), 1-6. Downloaded January 11, 2006 from,%20Digital%20Immigrants%20-%20Part2.pdf

Prensky, M. (2005). Listen to the natives. Educational Leadership, 63 (4), 8-13.

Rayner, K. (1978). Eye movements in reading and information processing. Psychological Bulletin, 85, 618-660.

Rayner, K., & McConkie, G. W. (1976). What guides a reader's eye movements? Vision Research, 16 (8), 829-837.

Richardson, M. J., & Spivey, M. J. (2004). Eye-tracking: Research areas and applications. In G. Wnek & G. Bowlin (Eds.), Encyclopedia of biomaterials and biomedical engineering. New York, NY: Marcel Dekker.

Salomon, G. (1984). Television is "easy" and print is "tough": The differential investment of mental effort in learning as a function of perceptions and attributions. Journal of Educational Psychology, 76 (4), 647-658.

Schlechty, P. C. (1997). Inventing better schools: An action plan for educational reform. San Francisco, CA: Jossey-Bass.

Schroeder, W. (1998). Testing web sites with eye-tracking. Downloaded January 11, 2006 from

Stampe, D. M., & Reingold, E. M. (1995). Selection by looking: A novel computer interface and its application to psychological research. In J. M. Findlay, R. Walker & R. W. Kentridge (Eds.), Eye movement research: Mechanisms, processes and applications (pp. 467-478). Amsterdam: Elsevier.

Stice, C. F., & Dunn, M. B. (1985). Learner styles and strategy lessons: A little something for everyone. A paper presented at the Annual Meeting of the Southeastern Regional Conference of the International Reading Association, Nashville, TN, November 1985 (ERIC Document Reproduction Service number ED 271 721).