A FEW THOUGHTS ABOUT VISUAL DIGITAL LANGUAGES

Summary of what I think I have read about the history of our written languages:

  •     Keeping economic records was a distinct process, different from using visual strings of symbols to tell readers what to say.  Early writing was but a script for talking.  It took a long time for spaces to appear between words, since in speech the phonemes run together and the sounds of separate words are not distinct. The first people observed reading silently were viewed as mad. With silent reading the written word slowly lost its phonetic character.  I speculate that the first unit utterances were what we today call sentences, with explicit attention to words coming later.
  •     What a person experiences when reading can be quite varied, depending on the person’s imagery ability and the type of literature.  As far as I know, even today the psychological study of reading asks what was the MEANING of what was read and very seldom asks what you EXPERIENCED while reading. Having no visual imagery, I have no visuals when reading anything. I did informal research with my Intro-Psy students; I wish I had taken real data. Most people report experiencing strong visuals when reading descriptive literature, although the type of imagery varies greatly.  When I asked students what they experienced when reading highly conceptual literature (philosophy), I was surprised when almost everyone replied NOTHING. Conceptual literature usually doesn’t stimulate associated visuals, but it sometimes produces distracting ones: for example, “field of physics” resulted in fields of flowers. Everyone I encountered who was both a strong visualizer and yet sought out and enjoyed highly conceptual literature had discovered a way to either suspend visual imagery or hold a static image. None were aware they did this until I questioned them.  To my knowledge, these observations have never been made public.
  •     Economics squeezed writing into linear strings – with different languages using different variations.  One variation of a new Visual Digital Language (VDL) would have the equivalent of one sentence per screenful.  Imagine the size of a book with one sentence per paper page! Everything is squeezed together to conserve paper (and papyrus before that).   In early times in most cultures, writing and reading were used only by a privileged elite. To many, reading must have appeared “magical” and the WORD sacred or demonic.
  •     According to Jay David Bolter in Writing Space, early manuscripts were hypertext: marginalia to marginalia.  A long time ago I experimented with composing in Bolter’s StorySpace app, but there was no one to read it. I learned about hypertext and online communication on the same day, in The Futurist magazine. But no one will read a hypertext doc.  Hypertext is mostly used to link docs within linear text; a Wikipedia page is really a linear doc with a clickable table of contents and linked footnotes.  I had (maybe still have) a version of Writing Space in StorySpace, where it was intended that you get lost!  You had to work to find the TOC.
  •     I have no direct experience with ideographic languages. I am told a Chinese text doesn’t tell the reader what to say; the symbols are said to stimulate experience with an abstract concept, for which the reader chooses what to say.  Two readers of an ideographic script will speak differently, though I expect that many ideographic readers develop a habit of what to say.  I have read that sign languages that emerge like creoles (not taught to a community of deaf children, but invented by them) are ideographic.
  •     Human children first learn auditory languages; then reading aloud, or watching others read, creates a process where even silent reading is routed through the verbal language parts of the brain.  This greatly slows down the visual reading process.  I am not sure how speed readers do it, or those savants who can process a whole page at a glance (if this is truly a fact). But most visual readers process what they read through the speech circuits of the brain. A VDL would not directly involve speech circuits.
  •     Did you have to diagram sentences in elementary school?  I did. There was a visual structure that spread the sentence into two dimensions, where the structure represented the syntax. But we never tried to read text where sentences were diagrammed.  Why not?  This is what I see as the start of a new VDL: arrange the words of a sentence on a page where their pattern represents syntax, and where a reader learns to comprehend the sentence at a glance, scanning it like a diagram.  Symbol properties, such as color, size, and font, can symbolize syntax and emphasis.  Background lines or curves could add information.
  •     I once had a rescue dog called Rebus.  Later I learned that a rebus is a picture substituted for a word in a sentence; I “remember” having “read” some in my youth. Acronyms are a form of rebus.  With the special facilities of word processors, I would like to see more types of punctuation: different types of commas and other marks, and a greater use of [{(___)}]. Sentences could have some of the nested structure of mathematical expressions – after all, the history of mathematical expressions started with abbreviations of sentences!
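The diagrammed-sentence idea above can be sketched in code. This is a minimal illustration of my own construction (the tree, the role names, and the indentation-as-layout are all hypothetical, not a proposed standard): a sentence is stored as a small syntax tree and rendered in two dimensions, so structure is scanned rather than read left to right.

```python
# A minimal sketch (illustrative only): represent a sentence as a syntax
# tree and "diagram" it in two dimensions, with indentation standing in
# for spatial layout on the page.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    role: str                                   # syntactic role, e.g. "subject"
    word: str                                   # the word or phrase itself
    children: List["Node"] = field(default_factory=list)

def diagram(node: Node, depth: int = 0) -> str:
    """Render the tree with indentation; deeper nesting sits further right."""
    lines = [" " * (4 * depth) + f"{node.role}: {node.word}"]
    for child in node.children:
        lines.append(diagram(child, depth + 1))
    return "\n".join(lines)

# "The old dog chased the ball" as a tiny hand-built tree.
sentence = Node("verb", "chased", [
    Node("subject", "dog", [Node("modifier", "the"), Node("modifier", "old")]),
    Node("object", "ball", [Node("modifier", "the")]),
])
print(diagram(sentence))
```

In a real VDL the roles would be signaled by position, color, size, and font rather than by printed labels; the tree is just the underlying structure such a layout would draw from.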
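The nested-punctuation idea can also be made concrete with a toy example (again my own construction, not the author's notation): wrap nested parts of a sentence in delimiter pairs that cycle with depth, the way mathematical expressions nest ( { [.

```python
# A toy illustration of giving a sentence some of the nested delimiter
# structure of mathematical expressions, cycling ( { [ by nesting depth.

PAIRS = ["()", "{}", "[]"]

def nest(parts, depth: int = 0) -> str:
    """Join words, wrapping each nested sub-list in the pair for its depth."""
    open_, close = PAIRS[depth % len(PAIRS)]
    out = []
    for p in parts:
        if isinstance(p, list):
            out.append(open_ + nest(p, depth + 1) + close)
        else:
            out.append(p)
    return " ".join(out)

print(nest(["The dog", ["which we rescued"], "chased",
            ["the ball", ["under the porch"]]]))
```

The point is only that nesting depth becomes visible at a glance, as it is in an equation, instead of being inferred from commas and word order.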

Here I am mixing two different types of change for visual language.  New punctuation, etc., to alter traditional linear text so its structure better fits the syntax, is a small step.  The other is to spread the words, rebuses, and other symbols (including new ideographs – maybe even dynamic little video loops) on a 2D sheet.  Maybe the sheet could be spherical and you could rotate it.  With 3D visual networks, such as with MyBrain, one could structure it in 3D.  But let it start simple.  Standardized patterns would be necessary, and creating the best standards would be a long and iterative process – perhaps yielding a set of standards.

Another variation would have the visual page emerge over time, different symbols appearing in sequence.  Why not have sounds accompany the appearance of symbols?

  • [SIDE NOTE – we comprehend better and read faster if we read along with the text being spoken.  We can teach ourselves to read very much faster by accelerating the spoken reading speed while keeping the tone level from rising.  I always intended to learn this.  For a while I had everything I read processed via TextAloud. It is a great way to proofread.]

This needs to be researched by a coordinated social media project – I know there are major research projects coordinated online, and UPLIFT will use them extensively. Given all the other things I must do with the few years I have left, being part of this development would consume too much time.  A while back – somewhere in my archives – I made a few attempts to create digital frames for emergent sentences in 2D.  I do believe that we need to move to a temporary new visual language that incorporates hypertext and outlines.

CRITLINK was a platform developed by the nanotech crew around Eric Drexler.  He wanted a hypertext composing tool for the internet.  Every web page you brought to your computer was processed through a CRITLINK server, which let you add coded links/buttons anywhere on the screen – and then compose a page for that link.  Others loading a CRITLINK-processed web page would see the links and be able to click through to the docs you attached. Unfortunately it never caught on, nor is “composing in hypertext” an acknowledged need. YouTube tried, a while back, a feature where you could attach text comments to any part of a video sequence.

The new visual digital language should have a way for every “reader” to mark up the page with comments and links.  A new reader could see a list of other readers and select to view the markups of any of the prior markers.  I would like this for eBooks, and would like to read eBooks that have been annotated by others I respect. An economy could be built around this, paying for the use of annotations.
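The markup layer described above can be sketched as a simple data model (a hedged sketch; the names and span-based anchoring are my own illustrative choices): every reader attaches annotations to spans of the page, and a new reader selects whose layers to view.

```python
# A sketch of per-reader annotation layers: each annotation anchors a
# comment or link to a character span; viewers filter by chosen readers.

from dataclasses import dataclass
from typing import List, Set

@dataclass
class Annotation:
    reader: str        # who made the markup
    start: int         # character span within the page text
    end: int
    note: str          # comment text, or a link target

def visible_layers(annotations: List[Annotation], chosen: Set[str]) -> List[Annotation]:
    """Return only the annotations from readers the viewer selected."""
    return [a for a in annotations if a.reader in chosen]

page = "Early manuscripts were hypertext."
marks = [
    Annotation("alice", 0, 17, "cf. Bolter, Writing Space"),
    Annotation("bob", 23, 32, "link: what counts as hypertext?"),
]
for a in visible_layers(marks, {"alice"}):
    print(a.reader, "->", page[a.start:a.end], "|", a.note)
```

An annotation economy would sit on top of this: the filter step is where paid or respected annotators' layers would be offered for selection.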

I also imagined every page having a border.  The border would contain symbols and links relating to where that text fits in a larger network, and some metadata could be coded into it.  The border could change as one moved the cursor over the page – much as small windows might appear when the cursor moves over a spot.

How multiple pages can be viewed concurrently is also important; I hate it when a linked page replaces the page I was reading.  Long, long ago there was an app, before we had graphics, called TORNADO.  You could quickly create a mosaic or array of many small windows, each containing a few sentences.  Neil Larson invented HOUDINI, which created a network of terms.  There were three columns of words: the left column had words pointing to THE word in the center column; the right column had words pointed to from the word in the center column.  You would just add new words to the center column and then select words from the right and left columns to link them to.  If you typed a word already in the system you would be referred to it.  Truly massive webs of words could be created, revealing very interesting patterns. This was before computer graphics, but you could print out the networks. Graphics made many useful features extinct.
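The three-column HOUDINI view can be reconstructed as a tiny directed word graph (hypothetical code based only on the description above, not Larson's implementation): each word keeps its incoming links (left column) and outgoing links (right column) around a center word.

```python
# A minimal word web in the spirit of HOUDINI: link words directionally,
# then view any word as (incoming, center, outgoing) columns.

from collections import defaultdict

class WordWeb:
    def __init__(self):
        self.out = defaultdict(set)    # word -> words it points to
        self.into = defaultdict(set)   # word -> words pointing to it

    def link(self, src: str, dst: str) -> None:
        """Add a directed link between two words."""
        self.out[src].add(dst)
        self.into[dst].add(src)

    def columns(self, word: str):
        """The three-column view for a center word."""
        return sorted(self.into[word]), word, sorted(self.out[word])

web = WordWeb()
web.link("writing", "hypertext")
web.link("hypertext", "marginalia")
web.link("diagram", "hypertext")
print(web.columns("hypertext"))
```

Navigation is just re-centering: clicking a word in the left or right column calls `columns` on it, which is how a massive web stays browsable three columns at a time.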

Diagrams with words in them are a type of visual language.  They are scanned, not read.  Many people don’t know how to scan diagrams and are as afraid of them as they are of equations. Graphs are a special type of diagram; it is astounding how many who make graphs fail to make them useful to those not already familiar with them.  We think we live in a visual world, yet it astounds me how such simple graphics are avoided in the media.  Is there a graphics phobia? Humans need instruction in using diagrams and graphs as much as in learning to read and write.

Super complex mind maps are not what I view as a visual language; they are charts.  I wish I had time to learn to use VUE (Visual Understanding Environment), which permits different kinds of links, and links can also be objects.  With VUE one crafts text.  I would like a similar tool for creating hypertext docs while the ideas are emerging in my mind.  None of the mind-mapping apps appeal to me, with their imposed structures.  I would like to give speech instructions to my computers along with finger/keyboard and touch input.  I want to tell my computer to use RED and size 14.  I know it is possible, but I expect I would have to create it for myself.

Visuals can be very useful in navigating, as a map, TOC, or diagrammatic index. The gestalt experience of a visual is like experiencing a meta-word – experiencing like a mammal, as reported by Temple Grandin (who is autistic).  Language communicates the CONCEPTUAL and requires digital structure. I feel that experiencing a graphic like a painting will never adequately convey the conceptual; the conceptual is beyond, and complementary to, the visual experience.  The experience of a word in isolation, or of an ideogram, can involve the intuitive/emotional/aesthetic.  There may be visual geniuses who can have such gestalt experiences with more complex visual mind maps, but we cannot build a visual language for most people this way.

Those popular visuals where the words in a piece of text are shown with size indicating their frequency are cute.  Maybe people can learn to interpret them.  Tight clusters of words could be used instead of single words, and parts of the page could be zoomed into. Collaborative composing in the new visual language is another feature – along with TEAM scanning.  Maybe the dialog during scanning could be audio-recorded, to be kept as part of the surround for the text.
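The frequency-sized word visual reduces to a small computation (a sketch of my own; the mapping from count to size is an arbitrary illustrative choice): count word frequencies, then assign each word a size that grows with its count.

```python
# A sketch of the frequency-sized word visual: map each word in a text
# to a "font size" proportional to how often it occurs.

from collections import Counter
from typing import Dict

def word_sizes(text: str, base: int = 10, step: int = 4) -> Dict[str, int]:
    """Map each word to a font size that grows with its frequency."""
    counts = Counter(w.strip(".,").lower() for w in text.split())
    return {w: base + step * (c - 1) for w, c in counts.items()}

sizes = word_sizes("the word, the page, the visual word")
print(sizes)   # 'the' occurs most often, so it gets the largest size
```

A rendering layer would place the sized words on the page; clustering related words and zooming into regions, as suggested above, would be further passes over this same frequency table.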

The above are just a few ideas about a new Visual Digital Language.  In time it may be a technology as uplifting as the written word was to speech.  We need our representations to both simplify and to enable navigation of complexity.  We need a first step towards a VDL that will help us share our ideas and knowledge in ways that facilitate our working collectively with them.

Author: nuet

01/24/1935. BS-physics RPI 1956; MS-physics U of Chicago 1958; PhD-physics Yale 1965; PhD-Edu Psy U of Minnesota 1970. Auroral research, Byrd Station, Antarctica 11/1960-02/1962. MINNEMAST curriculum dev 1964-68. Woodstock. Faculty, Pima Community College, Tucson 1974-1997. Transdisciplinary scientist, philosopher, educator, futurist, activist. PC user since 1982. "Wife", daughter, 2 grandsons. 5 dogs & 7 cats. Lacks mental imagery in all sensory domains.