Tag Archives: pitch

Cartoonclopedia

Learning disabilities affect many students in the United States: 5% of students enrolled in public schools have been officially diagnosed with at least one LD, while many more go undiagnosed. A certain stigma attaches to this label, with many individuals mistakenly linking learning disabilities to a lack of intellect or a manifestation of autism (NCLD). In reality, students with LDs are of normal or above-average intelligence; they simply have trouble receiving, processing, and storing information in ways that come naturally to other people. While some schools, mostly private and charter, attempt to address these disabilities by varying the teaching methods in the classroom, public schools make little effort to mitigate the struggle these students face. Standards- and test-based pedagogy dominates the public system, leaving little room for alternative methods such as visual and oral teaching. In order to assist this under-served demographic, and to make reading more fun, I'd like to propose a new web- and app-based tool to improve youth literacy. Cartoonclopedia would bring together illustrators, artists, and cartoonists from around the world to build a database of drawings for as many words in the English language as possible.

When a student encounters a word they don't understand while reading, they're expected to look it up in a dictionary. In our current era of ubiquitous tech use, it's much more likely that the student will look the word up online. If you Google a word with the term "define" in front of it, the first search result is a definition. Resources such as Dictionary.com and Thesaurus.com likewise offer free definitions to anyone seeking them. Unfortunately, these sites require a revenue stream of some sort, so students are advertised to in exchange for the definition. Additionally, both print dictionaries and online databases rarely offer context for the definition. Occasionally the definition is followed by a sentence using the word, but in my experience those sentences rely on obscure references and would be difficult for a younger student to understand. Visual learning is a vital tool in children's pedagogy; children's books, computer games, and instructional toys all illustrate the importance of a visual component in acquiring basic reading and writing skills. Vannevar Bush observed that the human mind "operates by association," and by offering an image for students to refer to when thinking of a word, we'd create a link that makes comprehension easier. Few visual tools for improving a child's literacy are free, which restricts their use to children of middle- and upper-class families. This is one reason students whose LDs go unaddressed tend to come from lower-income families: the effects of LDs can be substantially mitigated with instruction, but doing so demands equally substantial time and financial resources from parents or schools. What is missing in this market for instructional tools is a free resource, usable both at home and in school, that would help students of all learning types understand new terminology: Cartoonclopedia.

Cartoonclopedia would be a prime example of what can be achieved through crowdsourcing in our ever more connected and globalized world. The Oxford dictionary contains about 170,000 words, and it is generally estimated that our language comprises about 250,000 words, so this would have to be a globally collaborative project. I'd aim to begin with an educational grant of some sort, whether from the government or a foundation, which would allow the project to pay the first illustrators to contribute substantial numbers of drawings to the database. The framework for the website and mobile app would be relatively simple and cheap to build; it wouldn't need a complicated user interface or advanced functionality. In the long run, it would be ideal to have user accounts so that instructors and students alike could build their own lists of words. Teachers could compile a list of illustrations for the words they found most challenging or most frequent in a text, which students would then review before and while reading. Students could save the words that gave them the most trouble in order to return to them later and commit them to memory. In the short run, however, simple access to the database would be more than sufficient. To seed the database, we would survey teachers about the terminology their students found most challenging and use scholarly tools to identify the words most commonly used in school texts.
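To make the proposal concrete, here is a minimal sketch of the data model the paragraphs above imply: each word maps to many contributed illustrations, and users keep personal word lists. All class and field names here are hypothetical, purely for illustration.

```python
# Hypothetical sketch of the core Cartoonclopedia data model: a word maps to
# many illustrations (from many artists), and users save personal word lists.
class Cartoonclopedia:
    def __init__(self):
        self.illustrations = {}   # word -> list of (artist, image_file)
        self.user_lists = {}      # user -> set of saved words

    def add_illustration(self, word, artist, image_file):
        """Accept a new drawing for a word from any contributing artist."""
        self.illustrations.setdefault(word.lower(), []).append((artist, image_file))

    def lookup(self, word):
        """Return every illustration submitted for a word."""
        return self.illustrations.get(word.lower(), [])

    def save_word(self, user, word):
        """Let a student or teacher add a word to a personal list."""
        self.user_lists.setdefault(user, set()).add(word.lower())

db = Cartoonclopedia()
db.add_illustration("chore", "New York illustrator", "chore_ny.png")
db.add_illustration("chore", "Mumbai illustrator", "chore_in.png")
db.save_word("student1", "Chore")
print(len(db.lookup("Chore")))  # → 2
```

The lookup is case-insensitive so that a student's query matches regardless of how the word appears in the text.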

One of the most exciting aspects of Cartoonclopedia is that, as a global collaboration, its illustrations will come from artists of all different cultural backgrounds. The project won't attempt to standardize the style of illustration; we'll only check that the content is both appropriate for and intelligible to younger students. This will result in the proliferation of many different cultural values and traditions: a New Yorker-style illustration of the word "chore" might be contrasted with a South American or Indian illustrator's interpretation of the word "work." The individuals portrayed in the cartoons will often wear the garb of the illustrator's own country and reflect his or her traditions, as the illustrator won't necessarily be American at all. In a world where many cultures are colliding at a rapid pace, this would aid the new generation's understanding and acceptance of the diverse population around them. It will help guarantee that cultural "knowledge evolves and endures throughout the life of a race rather than that of an individual" (Bush). Cartoonclopedia would both give students with disabilities a visual resource for understanding what they're reading and give all students a break from reading to look at something fun (and often funny) yet still instructive.

While some picture dictionaries do exist, they're not widely used, and Cartoonclopedia would work to bring the resource to any individual's fingertips. The concept behind an illustrated word bank is reflective design, in which we see vocabulary "as alterable rather than immutable" (Kraus). The images we associate with certain words come from our surroundings: social and media inputs. Cartoonclopedia seeks to furnish students (and all users) with a range of worldly perspectives on what words mean, allowing them greater breadth of understanding. Let's learn and laugh together!

[Image: vocabulary_cartoon_demo-1]


Works Cited:

“What Are Learning Disabilities? | Learning Difficulties.” National Center for Learning Disabilities. N.p., n.d. Web. 13 Nov. 2014.

Bush, Vannevar. “As We May Think.” The Atlantic. Atlantic Media Company, 01 July 1945. Web. 09 Nov. 2014.

Kraus, Kari. "Bibliocircuitry and the Design of the Alien Everyday." The New Everyday, 2013. Web.

Stacks

Stacks is an application for mobile devices and computers that allows users to stream audiobooks from an in-app library of content. Each book stream consists of audio, the reading of the text, and a visual component created by the app. While users listen to the audiobook, they view a computer-generated animation that matches and complements the story. This animation is produced by a computer's analysis of the pitches, tones, and frequencies of the reader's voice, rendered as animated fractal art with a multitude of textures and colors. Through this audiovisual presentation of a book, reading novels becomes a social experience: people can "watch" books together the way they watch television shows or movies. In an academic application of the program, using two senses to experience a book increases attention during reading and results in better absorption of the text. The application therefore appeals to students as well as to book clubs, friends, or families looking to experience reading together. Stacks makes reading a more social, more engrossing experience for users while still conveying the entirety of the original text.

The visual aspect of Stacks is a defining characteristic of the application. Fractal art, very aesthetically pleasing and specific to the text, is the backbone of the visual component of Stacks streaming. Made by computer software, fractal art is essentially the visual representation of mathematical equations. Because it is generated from specific formulas, the exact ratios and intricate detail of the pieces are very pleasing to the eye. Fractals, while formulaic, can be presented in an infinite number of ways through variations in color, texture, and the equations used. Stacks uses the sounds of the audiobook to create an animated fractal that matches the verbal variations of the reader, complementing the text. While programs that use sound to create visual animation already exist, such as the Visualizer in iTunes, these often amount to flashing lights on a black background and offer little variation. To create Stacks, we would need to develop software that uses audio to generate fractal animation. Once that software exists, Stacks can be built with streaming technology of the kind already used by Netflix and Spotify, distributed through the Apple App Store and Google Play, and made downloadable for computers through a website.
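To suggest that such software is feasible, here is a hedged sketch of one possible audio-to-fractal mapping: the dominant frequency of an audio frame steers the angle of a Julia-set constant c, and the loudness steers its radius, so the fractal shifts with the reader's voice. The specific mapping and all parameter values are assumptions for illustration, not a description of any existing visualizer.

```python
# Sketch: derive Julia-set parameters from one frame of audio.
import math, cmath

def audio_frame(freq_hz, amplitude, n=256, rate=8000):
    """Synthesize one frame of a pure tone (stand-in for audiobook audio)."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

def dominant_freq_and_loudness(samples, rate=8000):
    """Naive DFT peak-pick: return (dominant frequency in Hz, RMS loudness)."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(s) > best_mag:
            best_k, best_mag = k, abs(s)
    rms = math.sqrt(sum(x * x for x in samples) / n)
    return best_k * rate / n, rms

def julia_constant(freq, loudness):
    """Assumed mapping: pitch sets the angle of c, loudness its radius."""
    angle = (freq % 1000) / 1000 * 2 * math.pi
    radius = min(0.8, 0.3 + loudness)  # cap keeps the fractal connected-ish
    return complex(radius * math.cos(angle), radius * math.sin(angle))

frame = audio_frame(250, 0.5)          # a 250 Hz tone at half amplitude
f, loud = dominant_freq_and_loudness(frame)
c = julia_constant(f, loud)            # feed c to a Julia-set renderer each frame
```

Animating c smoothly from frame to frame would produce the continuously morphing fractal the essay describes; a production version would use an FFT library rather than this deliberately simple DFT loop.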

I recognize that a number of books have already been brought to life through a combination of audio and visual elements for audiences to enjoy, in the form of movies or television shows based on books. While these share Stacks's goal of bringing books to a wider audience in a new format, Hollywood interpretations have drawbacks that Stacks solves. First, movies are expensive to see or buy. Users may stream any book for free with their monthly Stacks subscription, making it an economical alternative to seeing a film adaptation in theaters or even buying a physical copy of the book. Second, only a very slim percentage of books get made into movies, and almost all of those are fiction novels. If a reader wants to visually experience a little-known book of poetry, they will not find a film version to enjoy. Even when a book does get adapted, it takes at least a year and a half to produce and release the film before a viewer can pay up to fifteen dollars to see it in theaters. Stacks can quickly generate fractal animations for audiobook files, which accompany most books very soon after the print edition is released. Lev Manovich discusses the automation of new media in his book The Language of New Media. He names the many capabilities of computer programming today, such as the ability to "automatically generate 3-D objects such as […] ready-to-use animations of complex natural phenomena," in support of the idea that automated creation means "human intentionality can be removed from the creative process, at least in part" (32). By taking the human element out of the visual creation, the animation is left to work with the creative piece that is the book itself, allowing for a much quicker turnaround than if someone made a film for every book ever written.

The element of discussion surrounding texts is often dropped after a person graduates from high school or college and stops reading books for academic purposes. Stacks introduces a social element to books that will make the experience much more appealing to the masses. When a person reads a physical book or an ebook, it is a quiet, solitary, highly individualized experience. When the book is taken out of one reader's head and projected onto a screen with an audio component, reading becomes a social activity. Stacks allows people to hear stories together the way they bond over television shows. With the opportunity to pause between chapters or at any point in the text, Stacks creates collaborative discussion: everyone is on the same page, or rather, at the same part of the Stacks stream. By bringing back this discussion-provoking reading environment, people get much more from texts than if they had read them alone.

According to Hayles's How We Think, "the sheer onslaught of information has created a situation in which the limiting factor is human attention" (12). Students of all education levels, from elementary school to college and beyond, have a difficult time focusing on reading due to the stillness of the activity, the need for a quiet setting, and the act of attending only to what they see on the page while ignoring all other senses. Stacks focuses the user's attention and makes any text easier to absorb by engaging two senses: sight and hearing. With two senses working at once, the brain processes the text in two different formats, which builds stronger connections and makes the text easier to absorb. This creates an experience that fits Hayles's description of "close reading," which "correlates with deep attention, the cognitive mode traditionally associated with the humanities that prefers a single information stream, focuses on a single cultural object for a relatively long time" (12). Close reading and deep attention are cognitive modes used less and less in our daily lives, but Stacks brings them back to the surface by incorporating them into our already technology-driven routines.

As far as marketing goes, Stacks would benefit most from advertisements on websites like SparkNotes or Shmoop, which are centered around reading and literary analysis, as well as sites like College Prowler and Rate My Professors, where high school and college students are always online. The logo is the word "Stacks" in bold lettering, with the vertical stroke of the "k" made of a pile of books and a fractal web of lines inside the "a." Advertisements will read: "Stacks: Stories for the Senses."

[Image: Screen Shot 2014-11-17 at 9.08.32 AM]

Works Cited

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Manovich, Lev. The Language of New Media. Cambridge, MA: MIT, 2002. Print.

Mirrors

The product that I would like to propose to the class is one that prizes versatility. It does not serve just one purpose; its functions are so vast that it becomes a Swiss Army knife for readers and writers. It can help organize, clarify, correct, and inspire those who use it. What makes this possible is an intelligence that is ever growing and can adapt to one's needs as a writer. I am talking, of course, about putting AI technology to full use. The AI will act within the word processor itself, making the writing process more efficient. The actual product will consist of a software program containing the AI, paired with a set of glasses that connect to it and aid the writer's efforts. I will call this new invention "Mirrors" because it is a reflection of the individual's thoughts during his or her writing process.

Much of this product is inspired by tools that already exist. Search engines put the most relevant and popular searches at the top of the list; Google even makes recommendations for what the user wants with the "I'm Feeling Lucky" button, and typing in key words triggers a drop-down with a variety of other popular searches involving those words. The AI in my device will act in a similar manner. As the user writes, the AI will provide a number of references and sources for the writer to utilize if he or she wishes. The writer may incorporate quotes from well-known books or movies; in these instances the program will automatically recognize the words and pull quick references from the internet so that the writer does not have to search for them him- or herself. In addition, the program will pick up on the subjects the writer discusses: sensing key words like "pharaoh" or "pyramids" will produce a drop-down list of available resources on Ancient Egypt. Kirschenbaum observes that some technologies force the user to switch between "different screens or interfaces," which he calls "modes," but that as technology advances these transitions are made "as invisible and seamless as possible" (Kirschenbaum 8). Although search technologies already exist, my program streamlines the process by performing the searches itself while the writer remains in the word processor. Hayles claims that the research process for scholars has changed with the inclusion of digital media, noting that "the main advantages are worldwide dissemination to a wide variety of audiences, in many cases far beyond what print can reach" (Hayles 3). Just as the efficiency of digital media expedites research by eliminating manual searches through bookshelves, the AI removes the manual search on the internet.
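As an illustration of the keyword-sensing behavior described above, a minimal sketch might map terms the writer types to suggested topics for the drop-down; the index and topic names are invented for the example.

```python
# Illustrative-only sketch of the Mirrors reference lookup: a small keyword
# index maps terms the writer types to suggested research topics, the way
# "pharaoh" or "pyramids" should surface Ancient Egypt resources.
TOPIC_INDEX = {
    "pharaoh": "Ancient Egypt",
    "pyramids": "Ancient Egypt",
    "gladiator": "Ancient Rome",
}

def suggest_resources(sentence):
    """Return topics to show in the drop-down, based on keywords seen so far."""
    topics = []
    for word in sentence.lower().split():
        topic = TOPIC_INDEX.get(word.strip(".,;:!?"))
        if topic and topic not in topics:
            topics.append(topic)
    return topics

print(suggest_resources("The pharaoh ordered new pyramids."))  # → ['Ancient Egypt']
```

A real version would use a far larger index and fuzzier matching, but the flow is the same: the text the writer is already producing drives the search, with no separate query step.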

The real innovation, however, is the AI's ability to predict and think about the content of what the writer is expressing. As the user continues to write, the AI will sense writer's block when the computer experiences little activity. At that point, the AI will attempt to make sense of the writer's thought process and propose a number of words that could best complete a sentence the writer has stopped midway through. Today's tools are "a material matrix against which we manufacture an ongoing array of haptic, affective, and cognitive engagements" (Kirschenbaum 10); essentially, the tools a writer uses should be just as engaged in the writing process as the writer. If the writer stops after a complete thought, the AI will even offer suggestions and comments on the flow of the paper, as well as recommendations for a possible next step, by searching the internet for content relevant to the paper. It is important to note, though, that the AI is not attempting to replace the writer but simply to give inspiration for the writer's next move.
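The writer's-block detection described above could be sketched as a simple rule: no keystrokes for a while, plus a draft that ends mid-sentence, triggers suggestions. The idle threshold here is an invented value.

```python
# Sketch of the writer's-block trigger: idle time + an unfinished sentence.
IDLE_THRESHOLD_SECONDS = 30  # assumed threshold, purely illustrative

def ends_mid_sentence(text):
    """A draft that doesn't end in terminal punctuation is mid-thought."""
    return not text.rstrip().endswith((".", "!", "?"))

def should_offer_suggestions(text, seconds_idle):
    """Offer completions only when the writer is both stalled and mid-sentence."""
    return seconds_idle >= IDLE_THRESHOLD_SECONDS and ends_mid_sentence(text)
```

The distinction matters because, per the essay, a pause after a complete thought should prompt flow feedback and next-step recommendations rather than sentence completions.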

While the AI helps with the cognitive side of the writing process, the glasses serve a more physical use. While I already see the connection between computer screens and eyewear in the form of Google Glass, my glasses will specialize in the writing process by utilizing scanners. The glasses will scan the user's pupils and sense their movements, so they can detect which lines the user is reading. I sometimes lose my place while reading, or reread what I just read to get a better understanding. The glasses will sense perplexity or confusion if my eyes dart up and down trying to find the previous line. The lenses will act as an additional screen, highlighting what I have already read with a yellow tint and marking the last line I read with a green line. If the glasses sense that I am reading the same lines over and over again, the AI connected to both the computer and the glasses will kick in and attempt to reword or restructure the sentences to aid my understanding. If the reworded sentence is still awkward, the writer can always refresh for more of the AI's options. At the same time, the glasses are a form of engagement that makes the writing experience so immersive that the user does not lose focus or concentration. Because the scanners are constantly fixated on the user's eyes and must correlate eye movement with the words on the screen, there is almost no room for distraction: the document and the user become one, blocking out the outside world. When the user turns his or her head away from the screen, the lenses disengage and wait to be reconnected. In a similar vein, George R. R. Martin claims that his "secret weapon" for combating distraction was WordStar, a basic and archaic program that "accounted for his long-running productivity" (Kirschenbaum 6). Although Martin's method was to use a basic program to block out his environment, I want to achieve the same effect while moving forward with technology in the form of Mirrors' glasses.
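The rereading detector the glasses rely on could, as a rough sketch, count how often the same line recurs among the most recent fixations; the window size and reread limit are invented parameters.

```python
# Sketch: flag a line as confusing if the eyes keep returning to it.
from collections import Counter

REREAD_LIMIT = 3  # assumed: fixating a line 3+ times in a window signals confusion

def detect_confusing_line(fixated_lines, window=10):
    """fixated_lines: line numbers reported by the eye scanner, in order.
    Return the most-reread line in the recent window, or None."""
    recent = fixated_lines[-window:]
    if not recent:
        return None
    line, count = Counter(recent).most_common(1)[0]
    return line if count >= REREAD_LIMIT else None

# Eyes bounce back to line 4 three times: the AI would offer a rewording.
print(detect_confusing_line([1, 2, 3, 4, 4, 5, 4, 6]))  # → 4
```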

These tools can be used by both professional writers and everyday users. Thanks to its simple user interface, the product is easy to learn; in fact, there is very little to learn, as the AI does not need commands to function. The AI takes care of its processes by itself, resulting in a practically nonexistent learning curve. All the user needs to do is connect the glasses so that the computer recognizes the device, then follow the guided installation for the program. After these simple steps are completed, the user will be up and running in short order. With its user-friendly interface, I plan for my product to target a multitude of different customers. The main consumer base will be students ranging from elementary to college level. The tool would also be effective for businesspeople and white-collar workers in general. While its functions serve many purposes, I intend to market the product for educational and work-related uses.

Because this product is both software and a physical object, there will be two different manufacturing processes. The program will be developed by computer science professionals, who will code it and distribute it digitally, or put the data on discs for those who want a physical copy. The glasses, on the other hand, will be manufactured by engineers who will carefully fit the lenses, which act much like small computer monitors. While the majority of each pair will be made of simple plastic, the engineers will have to incorporate components that let the glasses connect to a computer wirelessly and scan the user's eyes to detect movement.

As with most products, much of the success can be attributed to the marketing. Wanting to associate my product directly with writing, my catchphrase will be "Mirror, mirror on the wall, who will answer the writer's call?" Going back to the meaning behind the name "Mirrors," the logo will consist of Rodin's famous statue "The Thinker" inside a mirror, further reinforcing the idea that Mirrors is meant to guide the thinking and writing process. Advertising will be restricted to a few TV commercials and will not involve anything too elaborate; I believe word of mouth and good critic and customer reviews will be enough publicity. That being said, the commercials will be strictly informative and will rarely resort to humor or other gimmicks. A website will launch explaining the key features and giving examples of the product's usage. I would rather consumers research and evaluate the product themselves, because those who take the time to do so are the kind of people interested in buying it. Any other marketing would be an unnecessary expense. Because of its technological nature, it would make sense for this product to be found in stores such as Best Buy. Another outlet would be Staples, because it stresses work- and school-related use.

[Image: mirrors]

As consumers will soon see, Mirrors is a product that values professionalism while maintaining a certain openness through its simple user interface. Just as Microsoft Office is a staple product used by millions, I want the same for Mirrors. A couple of years after its initial release, I would hope to find Mirrors on every home computer and in every school and work environment.


Works Cited

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Kirschenbaum, Matthew. Track Changes: A Literary History of Word Processing. N.p.: Harvard UP, 2014. Print.

KindleQuest

In a day and age when we are confronted with thousands of images, the "attention span" of society is rapidly declining (Hayles 87). With students losing focus in school to outside activities that demand far less comprehension, our nation is in danger of fostering a generation with a sharp decline in productivity. Although we are experiencing a "paradigm shift" toward more digital technologies (Hayles 1), our school system is set in its ways and less willing to adopt new methods. Our nation is changing, and we need to change with it.

So how do we address the growing disparity between what students want and what teachers want? I propose an e-reader embed that engages students in their reading by prompting them with meta-cognitive questions and generating study tools to further their learning, even outside the classroom. These questions, like the examples below, are designed to improve students' reading comprehension and analysis skills:

  • Why is this important?
  • Are there any key terms or ideas here?
  • Why is this date significant?
  • What have I learned about this in the past?

These questions will appear as the student is reading, promoting a new engagement with digital text. Although many teachers make study sheets with questions to consider, this embed is different because it works through a digital medium: teachers can either go with the pre-written set of questions or customize the questions for their students. With a log-in code within the ebook, the teacher can add, delete, or modify questions. Students then log in to their previously purchased version of the book and engage with the questions while reading. The embed will compile the key terms that the student entered or highlighted, generate a graphic organizer of important themes or ideas, and create a student-based outline or set of notes. With Wi-Fi connectivity, the student will be able to print his or her outline or notes. Fostering a new relationship between "form and content," the embed will allow for a new interactivity with text (Kraus 82).
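As a hypothetical mock-up of the outline-generation step described above, highlighted terms tagged by chapter could be grouped into a printable outline; the data shape is an assumption for illustration.

```python
# Sketch: turn a student's highlights into a chapter-by-chapter outline.
def build_outline(highlights):
    """highlights: list of (chapter, term) pairs in the order the student
    marked them. Returns printable outline text grouped by chapter."""
    chapters = {}
    for chapter, term in highlights:
        chapters.setdefault(chapter, []).append(term)
    lines = []
    for chapter in sorted(chapters):
        lines.append(f"Chapter {chapter}")
        for term in chapters[chapter]:
            lines.append(f"  - {term}")
    return "\n".join(lines)

print(build_outline([(1, "manifest destiny"), (2, "gold rush"), (1, "frontier")]))
```

Within each chapter the terms keep the order in which the student highlighted them, so the outline doubles as a record of the reading.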

Building on teachers’ outlines, this “reflective design” will allow for a new level of critical thinking that is required in a digital age (Kraus 78). Additionally, this technology creates a new meshing of analytical thinking with the text, as previously created outlines or study guides are separate from the text. Society is ripe for change and the current technological advances demonstrate the need for more productivity, which is exactly what the embed will allow students to do. Although the embed is applicable to students from grades 2 or 3 and up, students with learning disabilities will find that the embed improves their analytical skills without detracting from what the material is saying.

Once Kindle obtains the licensing rights, it will be able to contact the author directly and ask whether he or she would like to add or create any questions to be embedded within the framework of the novel. As for additional settings, there will be an on/off switch that teachers can have their students enable so that students' outlines are sent to the teacher directly. This can be helpful for making sure students who need extra help or attention are on the right track. One of the most unique features of the embed is the extensibility provided by an SMS component. Students are able to flag parts of the text and create a conversation within the novel. A virtual book club, or Twitter-like sphere, the SMS feature creates productive and intellectual conversation within a class. It is aimed at increasing understanding among all students and bridging the gap between those who are having trouble with the material and those who are not.

A specific target audience the embed would appeal to is students with learning disabilities or attention disorders. Since the embed allows for less diversion of attention from the material, children with ADD or ADHD are more likely to gain a deeper appreciation of it. Students with learning disabilities may find that the material comes easier to them when they are able to read and analyze all at once. On a personal level, I have executive functioning disorder, so this embed would improve my reading skills because I wouldn't have to divert my attention from the text. Sometimes when I am bent on analyzing a specific part of the text, I forget about the whole and what I just read. This embed would allow me to think critically about the text in a targeted way that isn't overwhelming.

In order to manufacture this embed, I would need a team of technicians to ensure it is compatible with a variety of digital platforms including, but not limited to, the Kindle, iBooks, and Nook. Before creating a platform where the text can be adapted to include questions, the technicians will have to understand the capabilities and specificities of each platform. Then the technicians would work with authors, literary historians, and teachers to create important questions for each book. With this in place, the embed will compile the information the student entered into an outline. Ultimately, there would be an algorithm for this, but for our project I would like to do a mock-up of what a student entered and the outline that is generated.

It is my personal belief that it will be best to start with the Kindle, since it has the largest user base. For my project I would like to create an embed for a Kindle that focuses on one specific book. By doing this we would prototype the interactive book to show as an example for future titles, which would make the project manageable for a group of college freshmen. Once we finish the project, we can take it to a team of technicians and have them write a general algorithm for compiling the information.

In terms of selling our idea, we would license the embed to Kindle, iBooks, and Nook so that each can integrate the format into its compatible software. Once they purchase the embed, we will be able to run joint advertisements that promote the embed within the framework of a specific company. Consumers will be able to purchase either the regular edition or the special embed edition from within the Kindle framework.

As far as marketing goes, we would target teachers, educators, and students alike in order to get our product out there. A specific logo and slogan I have designed is: Read with us. Achieve with us. (as pictured below).

[Image: Screen Shot 2014-11-09 at 5.46.55 PM]

Another slogan I thought of is the one below:


[Image: Screen Shot 2014-11-09 at 5.50.57 PM]


And finally:

[Image: Screen Shot 2014-11-14 at 6.19.03 PM]


The design of the embed itself "[confronts] the affordances" of digital technology and combines them with critical thinking skills (Kraus 84). The logo, therefore, should inspire society to do better and think better, which the logos above do. Vannevar Bush wrote that science "has provided a record of ideas and has enabled man to manipulate and to make extracts from that record so that knowledge evolves and endures throughout the life of a race rather than that of an individual" (Bush 1). With the creation of this embed, knowledge will be able to evolve and change based on society's growing reliance on technology. And isn't that really what progress is about: the change and evolution of an idea? The book is a seed, and this embed is the tree that grows from it.

Works Cited

Bush, Vannevar. “As We May Think.” The Atlantic. Atlantic Media Company, 01 July 1945. Web. 09 Nov. 2014.

Hayles, Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: U of Chicago, 2012. Print.

Kraus, Kari. "Bibliocircuitry and the Design of the Alien Everyday." The New Everyday, 2013. Web.