DigLitWeb: Digital Literature Web


The Three Voices (Fig 3)  

F de Ferramentas Esta página web contém hiperligações para ferramentas informáticas de análise textual, edição textual, colecção digital, visualização de dados e colaboração em linha. Inclui-se ainda uma amostra de aplicações usadas na literatura electrónica.

T for Tools This webpage contains hyperlinks to software tools for textual analysis, textual editing, digital collection, data visualization and online collaboration. A sample of applications used for the production of electronic literature is also included.


NB: Excepto quando houver outra indicação, os textos de apresentação de cada ferramenta pertencem aos respectivos autores e editores, e foram transcritos do sítio web.

NB: Unless otherwise indicated, annotations on the selected tools belong to their respective editors and authors, and they have been transcribed from the website.


Cinemetrics [2006-present]


University of Chicago

Authors: Yuri Tsivian and Gunars Civjans

Cinemetrics is a tool that not only lets one record data to analyze movies as Yuri Tsivian did in his Intolerance study, but also publishes the gathered data on this website for everyone to access. This is a collaborative project: the more data users submit, the more will be available to them. We hope this will grow into a movie dynamics database useful to everyone.
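The basic statistic Cinemetrics collects is shot-length data. As a minimal sketch (with invented sample timestamps, not Cinemetrics' actual data format), one can derive shot lengths and the average shot length (ASL) from the times at which cuts occur:

```python
# Hypothetical illustration: given the timestamps (in seconds) at which
# cuts occur, derive each shot's duration and the average shot length
# (ASL), the core statistic of film-dynamics analysis.

def shot_lengths(cut_times):
    """Return the duration of each shot from a sorted list of cut times."""
    return [b - a for a, b in zip(cut_times, cut_times[1:])]

def average_shot_length(cut_times):
    """Average shot length (ASL) in seconds."""
    lengths = shot_lengths(cut_times)
    return sum(lengths) / len(lengths)

cuts = [0.0, 4.5, 6.0, 11.0, 13.5]  # invented sample data
print(average_shot_length(cuts))    # mean of [4.5, 1.5, 5.0, 2.5]
```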


CollateX [2010]


COST Action 'Interedition'

CollateX is Java software for collating textual sources, for example, to produce a critical apparatus. As the designated successor of Peter Robinson's Collate, it is developed jointly by several partner institutions and individuals under the umbrella of the European initiative "Interedition". We strive for a component-oriented architecture, where our users can mix and match parts of CollateX as they see fit for their particular usage scenarios. Currently, we offer three components:
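CollateX's own Java API is not reproduced here, but the underlying idea of collation can be sketched with the standard library: align the tokens of two witnesses and report where they agree and where they vary.

```python
# A minimal stdlib sketch of token-level collation of two witnesses;
# CollateX offers far richer multi-witness alignment, but the core idea
# is the same: align tokens, then report agreements and variants.
import difflib

def collate(witness_a, witness_b):
    """Return a list of (tag, tokens_a, tokens_b) alignment segments."""
    a, b = witness_a.split(), witness_b.split()
    sm = difflib.SequenceMatcher(a=a, b=b)
    return [(tag, a[i1:i2], b[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes()]

segments = collate("the quick brown fox", "the quik brown foxe")
for tag, ta, tb in segments:
    print(tag, ta, tb)  # 'equal' segments agree; 'replace' marks variants
```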


Collex [2008]


University of Virginia


Collex is a set of tools designed to aid students and scholars working in networked archives and federated repositories of humanities materials: a sophisticated COLLections and EXhibits mechanism for the semantic web.

Collex allows users to collect, annotate, and tag online objects and to repurpose them in illustrated, interlinked essays or exhibits. It functions within any modern web browser without recourse to plugins or downloads and is fully networked as a server-side application. By saving information about user activity (the construction of annotated collections and exhibits) as “remixable” metadata, the Collex system writes current practice into the scholarly record and permits knowledge discovery based not only on the characteristics or “facets” of digital objects, but also on the contexts in which they are placed by a community of scholars.


combinFormation [2.6 Beta 1]

an instance of interface ecology


Texas A&M University

combinFormation is a creativity support tool that integrates processes of searching, browsing, collecting, mixing, organizing, and thinking about information. We believe the primary purpose of digital information is to support your creative idea generation. Images and text engage complementary cognitive subsystems. Each collection of information resources is represented as a connected whole. This promotes information discovery, the emergence of new ideas in the context of information. Temporal visual composition generates a continuously evolving informationscape.


Chronos Timeline [2010]


HyperStudio, MIT


Chronos Timeline is designed specifically for the needs of the humanities and social sciences in representing time-based data. Chronos allows scholars and students to dynamically present historical data in a flexible online environment. Switching easily between vertical and horizontal orientations, researchers can quickly scan large numbers of events, highlight and filter events based on subject matter or tags, and recontextualize historical data. Chronos Timeline is one component in HyperStudio’s emerging Repertoire platform and can easily be integrated with other tools such as faceted browsers, maps, and visualization modules.





Datavisualization.ch Selected Tools

Datavisualization.ch Selected Tools is a collection of tools that we, the people behind Datavisualization.ch, work with on a daily basis and warmly recommend. This is not a list of everything out there, but a thoughtfully curated selection of our favourite tools that will make it easier to create meaningful and beautiful data visualizations.



Digital Research Tools Wiki



This wiki collects information about tools and resources that can help scholars (particularly in the humanities and social sciences) conduct research more efficiently or creatively.  Whether you need software to help you manage citations, author a multimedia work, or analyze texts, Digital Research Tools will help you find what you're looking for. We provide a directory of tools organized by research activity, as well as reviews of select tools in which we not only describe the tool's features, but also explore how it might be employed most effectively by researchers.


DSpace [version 1.6.1, May 2010; 2002-present]




DSpace is the software of choice for academic, non-profit, and commercial organizations building open digital repositories. It is free and easy to install "out of the box" and completely customizable to fit the needs of any organization. DSpace preserves and enables easy and open access to all types of digital content including text, images, moving images, mpegs and data sets. And with an ever-growing community of developers, committed to continuously expanding and improving the software, each DSpace installation benefits from the next.



eLaborate

Virtual Workplace for Social Sciences and Humanities [2009-present]


Huygens Instituut, Royal Netherlands Academy of Arts and Sciences

At the Huygens Instituut KNAW (a research institute for text edition and textual scholarship of the Royal Netherlands Academy of Arts and Sciences) a digital tool has been developed for the making of text-editions and for textual research in online working environments: eLaborate. In February 2009 a new version of this tool became available. In eLaborate, researchers can create websites in which they can work, individually or with a group of collaborators, on the transcription and edition of a text.



Flare

Data Visualization for the Web [2008-present]



Flare is an ActionScript library for creating visualizations that run in the Adobe Flash Player. From basic charts and graphs to complex interactive graphics, the toolkit supports data management, visual encoding, animation, and interaction techniques. Even better, flare features a modular design that lets developers create customized visualization techniques without having to reinvent the wheel.


Forging the Future [2006-present]



Forging the Future: New Tools for Variable Media Preservation is a consortium of museums and cultural heritage organizations dedicated to exploring, developing, and sharing new vocabularies and tools for cultural preservation. There are three practical and complementary tools that will be developed in this project: the Franklin Furnace Database (FFDB) is designed for cataloging variable media artworks and events contained in small to midsize collections of presenting arts organizations; the Digital Asset Management Database (DAMD) manages digital metadata that is directly relational to all the tools; the Variable Media Questionnaire (VMQ) contains data and metadata necessary to migrate, re-create, and preserve cataloged variable media objects. In addition, a number of tools have been developed to help these tools dovetail with each other and with other existing systems, including the Media Art Notation System (MANS), VocabWiki, and the Metaserver.



FREE Software Foundation EUROPE [2001-present]



The FSF Europe was launched on March 10th 2001 and supports all European aspects of Free Software, especially the GNU Project. We are actively supporting development of Free Software and furthering GNU-based Operating Systems such as GNU/Linux. Also, we provide an assistance centre for politicians, lawyers and journalists in order to secure the legal, political and social future of Free Software. Access to software determines who may participate in a digital society. (…) Therefore the freedoms to use, copy, modify and redistribute software - as described in the Free Software definition - allow equal participation in the information age.




General Public License



The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) [Version 2, June 1991, Free Software Foundation]


GZZ [2001-2003]


Director: Tuomas J. Lukka


Gzz is an open-source free software project implementing Zzstructure architecture. Zzstructure is an invention of Ted Nelson, and ZigZag is his trademark for it. ZigZag is no longer used in the name Gzz (it used to be used with permission).



Humanities Research Infrastructure Tools [2006-present]


Coordinator: Peter Shillingsburg

Center for Textual Studies and Digital Humanities [Loyola University, Chicago]

HRIT (Humanities Research Infrastructure and Tools) is a project supported by a Digital Startup grant from the NEH, with a research team led by Peter Shillingsburg. Its purpose is to build an open-source, collaborative, robust environment in which to aggregate, link or cross-reference, edit, and share vetted primary documentary texts--along with their scholarly enhancements, analyses, and commentaries, in the form of markup, annotation, keyword tagging, etc. It's based on a secure and integrated merged document, the CorText, which contains multiple variants of a given work and is amenable to standoff markup. The initial tools to be integrated into this environment include the standoff-markup Collaborative Tagging Tool (CaTT) that will enable humanist scholars to create sophisticated scholarly electronic editions and archives in a collaborative environment. The result will be an ecology consisting of the infrastructure, a group of initial on-line tools, and a model vetting system.
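The standoff markup HRIT relies on can be sketched in a few lines: annotations live outside the text and point into it by character offsets, so the base text stays untouched and multiple markup layers can coexist without overlapping-hierarchy problems. The schema below (start, end, layer, value) is a hypothetical illustration, not HRIT's actual format.

```python
# A minimal sketch of standoff markup: annotations reference the base
# text by character offsets rather than being embedded in it, so several
# independent layers of markup can annotate the same immutable text.

text = "In the beginning was the word."

# Each annotation: (start, end, layer, value) -- hypothetical schema.
annotations = [
    (7, 16, "pos", "noun"),      # "beginning"
    (25, 29, "theme", "logos"),  # "word"
]

def annotated_spans(text, annotations, layer):
    """Return (substring, value) pairs for one markup layer."""
    return [(text[s:e], v) for s, e, lyr, v in annotations if lyr == layer]

print(annotated_spans(text, annotations, "pos"))
```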


Information Aesthetics Weblog [2004-present]


Editor: Andrew Vande Moere (Katholieke Universiteit Leuven, Belgium)

Inspired by Lev Manovich's definition of "information aesthetics", this weblog explores the symbiotic relationship between creative design and the field of information visualization. More specifically, it collects projects that represent data or information in original or intriguing ways.


Interedition [2008-present]


Interedition is a COST Action; our aim is to promote the interoperability of the tools and methodology we use in the field of digital scholarly editing and research. There are a great many researchers out there in the field of textual scholarship. Some of you have written some amazing computer tools in the course of your research, and others of you could benefit greatly if these tools were openly available. The primary purpose of Interedition is to facilitate this contact—to encourage the creators of tools for textual scholarship to make their functionality available to others, and to promote communication between scholars so that we can raise awareness of innovative working methods.


Ivanhoe [2006]


Applied Research in Patacriticism, University of Virginia


Ivanhoe is a pedagogical environment for interpreting textual and other cultural materials. It is designed to foster critical awareness of the methods and perspectives through which we understand and study humanities documents. An online collaborative playspace, IVANHOE exposes the indeterminacy of humanities texts to role-play and performative intervention by students at all levels.


Juxta [version 1.6.5, April 2012]

Collation software for scholars


Applied Research in Patacriticism, University of Virginia


Juxta is an open-source cross-platform tool for comparing and collating multiple witnesses to a single textual work. The software allows users to set any of the witnesses as the base text, to add or remove witness texts, to switch the base text at will, and to annotate Juxta-revealed comparisons and save the results.

Juxta comes with several kinds of analytic visualizations. The primary collation gives a split frame comparison of a base text with a witness text, along with a display of the digital images from which the base text is derived. Juxta displays a heat map of all textual variants and allows the user to locate — at the level of any textual unit — all witness variations from the base text. A histogram of Juxta collations is particularly useful for long documents. This visualization displays the density of all variation from the base text and serves as a useful finding aid for specific variants. Juxta can also output a lemmatized schedule (in HTML format) of the textual variants in any set of comparisons.
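The heat-map idea can be sketched with the standard library: for each token of the base text, count how many witnesses diverge from it at that position. This is a rough stdlib approximation, not Juxta's actual collation algorithm.

```python
# Hypothetical sketch of a variant "heat map": per-token counts of how
# many witnesses differ from the base text at that position. Higher
# counts mark denser variation, as in Juxta's visualization.
import difflib

def variant_heat(base, witnesses):
    """Per-base-token count of witnesses that do not match it."""
    base_tokens = base.split()
    heat = [0] * len(base_tokens)
    for w in witnesses:
        sm = difflib.SequenceMatcher(a=base_tokens, b=w.split())
        for tag, i1, i2, _, _ in sm.get_opcodes():
            if tag != "equal":
                for i in range(i1, i2):
                    heat[i] += 1
    return heat

base = "the quick brown fox"
wits = ["the quik brown fox", "the quick browne foxe"]
print(variant_heat(base, wits))  # higher numbers = denser variation
```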


Larsen, Deena

Rhetorical Devices for Electronic Literature



This is a coloring book/hornbook for electronic literature--it provides basic explanations of rhetorical devices (e.g., links, paths, navigation, images, sounds, etc.), simpler exercises for you to do and play with, and links to works that use these devices.

Deena Larsen


Lucene [version 3.6, April 2012]


Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform.
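Lucene's Java API is not shown here; the following is a toy sketch of the core data structure it is built on, an inverted index mapping each term to the documents that contain it, with a simple AND-query on top.

```python
# Toy sketch of an inverted index (the structure underlying full-text
# search engines such as Lucene): term -> set of documents containing it.
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns {term: set of doc_ids}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-query: return the doc ids containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for t in terms[1:]:
        result &= index.get(t, set())
    return result

docs = {1: "full text search", 2: "text analysis", 3: "search engine"}
idx = build_index(docs)
print(search(idx, "text search"))  # docs containing both terms
```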


Many Eyes


Collaborative User Experience research group, Visual Communication Lab, IBM [2004-present]


Many Eyes is a bet on the power of human visual intelligence to find patterns. Our goal is to "democratize" visualization and to enable a new social kind of data analysis. All of us in CUE's Visual Communication Lab are passionate about the potential of data visualization to spark insight. It is that magical moment we live for: an unwieldy, unyielding data set is transformed into an image on the screen, and suddenly the user can perceive an unexpected pattern. As visualization designers we have witnessed and experienced many of those wondrous sparks. But in recent years, we have become acutely aware that the visualizations, and the sparks they generate, take on new value in a social setting. Visualization is a catalyst for discussion and collective insight about data.



ManyNets

Visualize Many Networks Simultaneously [2009-present]


Human-Computer Interaction Lab, University of Maryland


ManyNets is a network visualization tool with a tabular interface designed to visualize up to several thousand network overviews at once. This allows networks to be compared, and large networks to be explored using a divide-and-conquer approach. For example, comparing different social networks can provide insights into the underlying causes of their differences. Or an individual social network can be subdivided into temporal slices, which can then be examined to locate temporal patterns or regions and periods of change. Networks can also be subdivided and compared based on motifs (small patterns of connectivity), clusters, or network-specific attributes.



Metamedia

Transforming Humanities Education at MIT [2002-present]



The Metamedia project provides students and faculty with a flexible online environment to create, annotate and share media-rich documents for the teaching and learning of core humanistic subjects. Using the open standards-based Metamedia framework, faculty members further pedagogical innovation by building subject-specific mini-archives that extend the use of multimedia materials in the classroom and enable the formation of learner communities across disciplines and distances. Drawing on Metamedia applications as they research, develop, and collaborate on multimedia essays or in-class presentations, students improve their media literacy skills and gain a better understanding of how media influences their lives and shapes their interpretations. The result is increased skill at communicating effectively in today’s increasingly global world of education and business.



NetLens

Iterative Exploration of Content-Actor Network Data [2006]


Human-Computer Interaction Lab, University of Maryland


Most visualization research on understanding relationships in large datasets implicitly assumes that a node-link diagram is appropriate. However, we believe that while node-link diagrams have their place, they don't scale up well, too often produce cluttered overviews with few readable labels, and often have difficulty supporting even the simplest tasks, such as reviewing the papers that cite a selected paper. In this research, we took a completely different approach in designing NetLens by using multiple simple coordinated views of ordered lists and histogram overviews to represent a Content-Actor model of information. Examples of Content-Actor pairs of interest to the visual analytics community include scientific publications and authors, emails and people, legal cases and courts, intelligence reports and countries, etc. In all these examples, both the content and actors consist of networked data, such as reports citing other reports, or authors having advisors or co-authors. NetLens shows paired networks of content and actors in coordinated views and allows users to refine their queries by iteratively transferring filtered data from one entity window to the other.


Node XL

Network Overview, Discovery and Exploration for Excel [2007-present]


NodeXL is a template for Excel 2007 that lets you enter a network edge list, click a button, and see the network graph, all in the Excel window. You can easily customize the graph’s appearance; zoom, scale and pan the graph; dynamically filter vertices and edges; alter the graph’s layout; find clusters of related vertices; and calculate a set of graph metrics. Networks can be imported from and exported to a variety of data formats, and built-in connections for getting networks from Twitter, Flickr, YouTube, and your local email are provided.
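The edge-list-to-metrics step that NodeXL automates inside Excel can be sketched in a few lines; the example below (with an invented edge list) computes the simplest of the graph metrics it reports, vertex degree.

```python
# A small sketch of computing a graph metric from an undirected edge
# list, the kind of calculation NodeXL performs at the click of a button.
from collections import Counter

def degrees(edge_list):
    """edge_list: iterable of (u, v) pairs. Returns a Counter of degrees."""
    deg = Counter()
    for u, v in edge_list:
        deg[u] += 1
        deg[v] += 1
    return deg

# Invented sample network.
edges = [("ann", "bob"), ("bob", "carl"), ("ann", "carl"), ("ann", "dee")]
print(degrees(edges))  # "ann" has the highest degree
```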


NVSS 1.0

Network Visualization by Semantic Substrates [2006-2008]


Human-Computer Interaction Lab, University of Maryland


NVSS 1.0 (Network Visualization by Semantic Substrates) enables users to specify regions in which to place nodes, and then to control link visibility.


Omeka [version 1.5.1, April 2012]


Center for History and New Media, George Mason University


Omeka is a web platform for publishing collections and exhibitions online. Designed for cultural institutions, enthusiasts, and educators, Omeka is easy to install and modify and facilitates community-building around collections and exhibits. It is designed with non-IT specialists in mind, allowing users to focus on content rather than programming.

Omeka will come loaded with the following features:


OAC: Open Annotation Collaboration [2009-present]


Center for Informatics Research in Science and Scholarship [University of Illinois at Urbana-Champaign]

The overarching goals of this project (consisting of multiple phases) are:


<oXygen/> XML Editor [version 13.2, January 2012]



<oXygen/> is a complete cross-platform XML editor providing tools for XML authoring, XML conversion, XML Schema, DTD, Relax NG and Schematron development, XPath, XSLT and XQuery debugging, and SOAP and WSDL testing. Integration with XML document repositories is provided through the WebDAV, Subversion and S/FTP protocols. <oXygen/> also has support for browsing, managing and querying native XML and relational databases. The <oXygen/> XML editor is also available as an Eclipse IDE plugin, bringing unique XML development features to this widely used Java IDE.
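As a tiny taste of the XPath querying such editors support, here is the limited XPath subset available in Python's standard library ElementTree (the sample document is invented; oXygen itself supports full XPath 1.0/2.0):

```python
# Querying XML with the restricted XPath subset supported by the
# standard library's ElementTree: descendant search and an attribute
# predicate.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<poem><line n='1'>April is the cruellest month</line>"
    "<line n='2'>breeding lilacs out of the dead land</line></poem>"
)

all_lines = doc.findall(".//line")        # every <line> descendant
second = doc.find(".//line[@n='2']")      # the <line> with n='2'
print(len(all_lines), second.text)
```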


A Periodic Table of Visualization Methods [version 1.5, 2010]


Authors: Martin Eppler and Ralph Lengler


Processing [2003-present]


Co-founders: Ben Fry and Casey Reas

Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It was created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool.


RiTa: a software toolkit for generative literature [beta version, 2009]


Author: Daniel C. Howe

RiTa is an easy-to-use natural language library that provides simple tools for experimenting with generative literature. The philosophy behind the API is to be as simple and intuitive as possible, while still providing adequate flexibility for more advanced users. The download comes in two flavors: (1) the 'core' package, containing the jar files and documentation, and (2) the 'TTS' package, which adds text-to-speech support. Additionally, statistical models for tagging, chunking, and parsing are available for more advanced users (see 'Stat-Models'). RiTa optionally integrates with Processing and is both free and open-source.
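RiTa's own API is not shown here; as a stdlib sketch of one technique it supports for generative literature, here is a simple word-level Markov chain that learns which words follow which and then generates new text:

```python
# A toy word-level Markov chain, one of the standard techniques behind
# generative text (RiTa provides much richer n-gram models).
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, n, seed=0):
    """Generate up to n words starting from `start` (seeded for replay)."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the cat ran")
print(generate(chain, "the", 5))
```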


SDPublisher: Scholarly Digital Publisher [version 1.1, September 2009]


Authors: Peter Robinson, Zeth Green and Andrew West


SDPublisher is the successor publication system to Anastasia. Like Anastasia, it sees XML as a stream as well as a hierarchy. It is therefore highly suited to processing documents characterized by multiple overlapping hierarchies: showing a book by pages, or by chapters, for example. Like Anastasia, it does not use XSLT, and is based on open source software. However, it is different from Anastasia in almost every other respect. It provides much better support for XML standards; it uses a database to enable dynamic representation of texts (Berkeley DB XML in the default configuration); it is not limited to Apache servers; it uses Python rather than TCL for scripting; it uses the Django framework in Python for elegant and efficient implementation of complex websites.

SDPublisher is built around 'Pixelise', an XML processing engine devised by Andrew West in 2008. Development of Pixelise was taken up by Zeth Green and Peter Robinson, and SDPublisher created around it, in late 2008/early 2009. Version 1.0 was released on May 5, 2009.



SIMILE

Semantic Interoperability of Metadata and Information in unLike Environments [2003-2008]


MIT Libraries and MIT Computer Science and Artificial Intelligence Laboratory


SIMILE seeks to enhance inter-operability among digital assets, schemata/vocabularies/ontologies, metadata, and services. A key challenge is that the collections which must inter-operate are often distributed across individual, community, and institutional stores. We seek to be able to provide end-user services by drawing upon the assets, schemata/vocabularies/ontologies, and metadata held in such stores.



SocialAction

Integrating Statistics and Visualization for Social Network Analysis [2006-present]


Human-Computer Interaction Lab, University of Maryland

SocialAction is a social network analysis tool that integrates visualization and statistics to improve the analytical process.


A Study of Professional Reading Tools for Computing Humanists [2006]


A report by Ray Siemens, John Willinsky, Analisa Blake, et al.

A good deal of the emerging research literature concerned with online information resources focuses on information retrieval, which is concerned with the use of search engines to locate desired information. Far less attention has been paid to how the found materials are read and how that critical engagement can be enhanced in online reading environments. This paper reports on a study examining the question of whether a set of well-designed reading tools can assist humanities computing scholars in comprehending, evaluating and utilizing the research literature in their area.


TAPoR

Text Analysis Portal for Research



TAPoR will build a unique human and computing infrastructure for text analysis across the country by establishing six regional centers to form one national text analysis research portal. This portal will be a gateway to tools for sophisticated analysis and retrieval, along with representative texts for experimentation. The local centers will include text research laboratories with best-of-breed software and full-text servers that are coordinated into a vertical portal for the study of electronic texts. Each center will be integrated into its local research culture and, thus, some variation will exist from center to center. The TAPoR (Text Analysis Portal) project is based at McMaster University, and consists of a network of six of the leading Humanities computing centres in Canada (Université de Montréal, McMaster University, University of Victoria, University of Alberta, University of New Brunswick, and University of Toronto).



The Text Encoding Initiative [1994-present]



The Text Encoding Initiative (TEI) is a consortium which collectively develops and maintains a standard for the representation of texts in digital form. Its chief deliverable is a set of Guidelines which specify encoding methods for machine-readable texts, chiefly in the humanities, social sciences and linguistics. Since 1994, the TEI Guidelines have been widely used by libraries, museums, publishers, and individual scholars to present texts for online research, teaching, and preservation. In addition to the Guidelines themselves, the Consortium provides a variety of supporting resources, including resources for learning TEI, information on projects using the TEI, TEI-related publications, and software developed for or adapted to the TEI.
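As a minimal, hypothetical illustration of TEI-style encoding, the snippet below builds a tiny document using common TEI element names and extracts its metadata and markup with the standard library. Real TEI documents declare the TEI namespace and carry a much richer teiHeader.

```python
# A toy TEI-style document (hypothetical, namespace omitted) parsed with
# ElementTree: a teiHeader holds metadata, and inline elements such as
# <name> mark up the text itself.
import xml.etree.ElementTree as ET

tei = """<TEI>
  <teiHeader>
    <fileDesc><titleStmt><title>Sample</title></titleStmt></fileDesc>
  </teiHeader>
  <text><body>
    <p>The <name type="person">Author</name> wrote this line.</p>
  </body></text>
</TEI>"""

root = ET.fromstring(tei)
title = root.find(".//title").text
persons = [e.text for e in root.findall(".//name[@type='person']")]
print(title, persons)
```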


TEI wiki

Authoring and Editing Tools


The number of tools for authoring and editing TEI documents is growing rapidly. While the TEI does not endorse any specific tools, it does maintain a contributory list of tools to help users learn about what is available and make informed choices.


Thinkmap SDK [version 2.8, 2012]


Thinkmap applications allow users to make sense of complex information in ways that traditional interfaces are incapable of. The Thinkmap SDK (v. 2.8) includes a set of out-of-the-box configurations for solving common visualization problems, as well as new visualization techniques for customizing data displays. Thinkmap visualizations can now be built rapidly using an XML-based configuration language.



A Humanist Format for Re-Usable Documents and Media [2007]


Author: Theodor Holm Nelson


What is literature?  Literature is (among other things) the study and design of documents, their structure and connections.  Therefore today's electronic documents are literature, electronic literature, and the question is what electronic literature people really need.

Electronic literature should belong to all the world, not just be hoarded by a priesthood, and it should do what people need in order to organize and present human ideas with the least difficulty in the richest possible form.

A document is not necessarily a simulation of paper.  In the most general sense, a document is a package of ideas created by human minds and addressed to human minds, intended for the furtherance of those ideas and those minds.  Human ideas manifest as text, connections, diagrams and more: thus how to store them and present them is a crucial issue for civilization.

The furtherance of the ideas, and the furtherance of the minds that present them and take them in, are the real objectives.  And so what is important in documents is the expression, reception and re-use of ideas.  Connections, annotations, and most especially re-use-- the traceable flow of content among documents and their versions-- must be our central objectives, not the simulation of paper.

Those who created today's computer documents lost sight of these objectives.  The world has accepted forms of electronic document that are based on technical traditions, and which cannot be annotated, easily connected or deeply re-used.  They impose hierarchy on the contents and ensnare page designers in tangles only a few can manage.

Theodor Holm Nelson


Treemap [version 4.1.1, February 2004]


Human-Computer Interaction Lab, University of Maryland

Treemap is a space-constrained visualization of hierarchical structures. It is very effective in showing attributes of leaf nodes using size and color coding. Treemap enables users to compare nodes and sub-trees even at varying depths in the tree, and helps them spot patterns and exceptions.
Treemap was first designed by Ben Shneiderman during the 1990s.
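The classic slice-and-dice layout behind the original treemaps can be sketched in a few lines: split the rectangle along one axis in proportion to each node's size (alternating axes at each level of the hierarchy). This is a one-level sketch only; the Treemap tool offers layouts with better aspect ratios, such as squarified treemaps.

```python
# One level of the slice-and-dice treemap layout: partition a rectangle
# into sub-rectangles whose areas are proportional to the given sizes.

def slice_and_dice(sizes, x, y, w, h, vertical=True):
    """Return one (x, y, w, h) rectangle per size, filling (x, y, w, h)."""
    total = float(sum(sizes))
    rects = []
    offset = 0.0
    for s in sizes:
        frac = s / total
        if vertical:  # slice along the horizontal axis
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:         # slice along the vertical axis
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

rects = slice_and_dice([2, 1, 1], 0, 0, 100, 50)
print(rects)  # widths proportional to 2:1:1
```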



TreePlus

Tree-based Graph Visualization [2005-2006]


Human-Computer Interaction Lab, University of Maryland


TreePlus is an interactive graph visualization system based on a tree-style layout. TreePlus transforms graphs into trees and shows the missing graph structure with visualization and interaction techniques. For example, TreePlus previews adjacent nodes, animates change of the tree structure, and gives visual hints about the graph structure. It enables users to start with a specific node and incrementally explore the graph.


variable media network / réseau des médias variables [2004]



The Variable Media Network proposes an unconventional preservation strategy based on identifying ways that creative works might outlast their original medium. This strategy emerged from the Guggenheim Museum’s efforts to preserve its world-renowned collection of conceptual, minimalist and video art. The growth of the Variable Media Network has been supported by the Daniel Langlois Foundation for Art, Science, and Technology, and subsequently promoted by the Forging the Future alliance. The aim of this diverse network of organizations is to develop the tools, methods and standards needed to rescue creative culture from obsolescence and oblivion.


Versioning Machine [version 4.0, June 2010]

A Tool for Displaying and Comparing Different Versions of Literary Texts


Digital Humanities Observatory, Royal Irish Academy

Author: Susan Schreibman


The Versioning Machine is a framework and an interface for displaying multiple versions of text encoded according to the Text Encoding Initiative (TEI) Guidelines. VM 4.0 has been updated to be P5-compatible. While the VM provides for features typically found in critical editions, such as annotation and introductory material, it also takes advantage of the opportunities afforded by electronic publication to allow for the comparison of diplomatic versions of witnesses, and the ability to easily compare an image of the manuscript with a diplomatic version.

The Versioning Machine is also a tool for textual editors, providing an environment that allows editors to immediately see the consequences of their editorial decisions. The Versioning Machine can be used locally on a Mac or a PC, or it can be mounted on the WWW for public access. The documentation provided with the software not only provides information about the use of the software, but builds upon the Critical Apparatus chapter of the TEI Guidelines to give further guidance to those who wish to use this method of encoding.


Virtual Lightbox [version 2.0, February 2004]


MITH, Maryland Institute for Technology in the Humanities


The Virtual Lightbox is a software tool for comparing images online. Comparison, what John Unsworth calls a "scholarly primitive," is a basic and probably intuitive operation that is nonetheless not well supported--for images anyway--by conventional Web browser technology; that is, users have no ability to move, juxtapose, or otherwise reposition images beyond the configuration in which they are delivered by a static page layout. As rich image collections continue to come online, it's becoming increasingly apparent that end-users lack the tools to exploit such resources to their full potential. The Lightbox is one attempt to meet this need. Though its target audience is in the academic humanities and the library and museum community, we expect the Lightbox to find users far removed from this sphere; indeed, we anticipate it will be of interest to anyone for whom images constitute an important data type.


Visual Complexity [2008-present]


Editor: Manuel Lima

VisualComplexity.com intends to be a unified resource space for anyone interested in the visualization of complex networks. The project's main goal is to leverage a critical understanding of different visualization methods, across disciplines as diverse as Biology, Social Networks, or the World Wide Web. I truly hope this space can inspire, motivate and enlighten any person doing research in this field.


Visual-Literacy.Org [2007-present]


Institute for Media and Communications Management, University of St. Gallen

Editor: Martin Eppler

The Visual-Literacy.org e-learning course will be used as an online leveling course as well as a blended skill-building course for students of fourteen different university courses in four universities (for more than 500 students). These courses require advanced analytical and conceptual visualization skills in order to transform abstract thought efficiently into graphic, tangible forms and to manage the topic complexity and the problems addressed in each class. Often in these courses, ranging from knowledge management to software engineering, the professors make references to key visualization principles and methods, without the necessary tools to actually develop these skills with the students. So far, teachers and students have had to rely on fragmented, ad-hoc material to provide visualization principles and exercises. They have had difficulty making students experience the kinds of challenges that arise when knowledge needs to be visualized for successful communication, strategizing or engineering tasks. Although visualization is a very creative topic, students have so far had only limited possibilities to creatively explore visualization in their application domains. With the Visual-Literacy.org online tutorial, professors and teachers can flexibly revert to an important resource whenever a course relies on conceptual visualization competence.


Visual Thesaurus [version 3, 2010]


The Visual Thesaurus creates an animated display of words and meanings — a visual representation of the English language. The Thinkmap visualization places your word in the center of the display, connected to related words and meanings. You can then click on these words or meanings to explore further.
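The interaction model described above can be sketched as a toy word graph: one word sits at the center, its neighbours are related words, and "clicking" a neighbour recenters the display. The word list below is invented for illustration; the real Thinkmap visualization draws on a licensed lexical database.

```python
# Toy sketch of the center-word / related-words interaction.
# RELATED is an invented adjacency table, not Visual Thesaurus data.
RELATED = {
    "bright": ["brilliant", "shining", "smart"],
    "smart": ["bright", "clever", "intelligent"],
    "clever": ["smart", "cunning"],
}

def recenter(word):
    """Return the neighbours to draw around the new center word."""
    return sorted(RELATED.get(word, []))

print(recenter("bright"))  # ['brilliant', 'shining', 'smart']
print(recenter("smart"))   # ['bright', 'clever', 'intelligent']
```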



WebTOC: A Tool to Visualize and Quantify Web Sites Using a Hierarchical Table of Contents [1997]


Human-Computer Interaction Lab, University of Maryland


WebTOC is a new tool developed at the University of Maryland Human-Computer Interaction Laboratory (HCIL) that gives a visualization of the contents of a Web site. It consists of two parts: the WebTOC Parser and the WebTOC Viewer. The Parser starts with a Web page and follows all the local links, generating a hierarchical representation of the documents local to the site. The Viewer displays this information as a Table of Contents (WebTOC) for the site using a standard Web browser.
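The Parser/Viewer split described above can be sketched in a few lines: turn a flat list of local page paths into a nested tree, then print it as an indented table of contents. This is a minimal illustration under invented paths; the real WebTOC crawls live links and also quantifies document sizes.

```python
# Minimal sketch of WebTOC's two stages: build a hierarchy from
# local page paths (Parser), then render it as an indented TOC (Viewer).
def build_tree(paths):
    """Nest path components into a dict-of-dicts tree."""
    tree = {}
    for path in paths:
        node = tree
        for part in path.strip("/").split("/"):
            node = node.setdefault(part, {})
    return tree

def print_toc(tree, depth=0):
    """Print the tree as an indented table of contents."""
    for name in sorted(tree):
        print("  " * depth + name)
        print_toc(tree[name], depth + 1)

pages = ["/index.html", "/papers/webtoc.html", "/papers/vis.html"]
print_toc(build_tree(pages))
# index.html
# papers
#   vis.html
#   webtoc.html
```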



World Wide Web Consortium [1994-present]



W3C primarily pursues its mission through the creation of Web standards and guidelines. Since 1994, W3C has published more than 110 such standards, called W3C Recommendations. W3C also engages in education and outreach, develops software, and serves as an open forum for discussion about the Web. In order for the Web to reach its full potential, the most fundamental Web technologies must be compatible with one another and allow any hardware and software used to access the Web to work together. W3C refers to this goal as “Web interoperability.” By publishing open (non-proprietary) standards for Web languages and protocols, W3C seeks to avoid market fragmentation and thus Web fragmentation.





Wiki [1995-present]


Software originally conceived by the programmer Ward Cunningham in 1995. Countless applications based on the same principle have since appeared, developed collaboratively, predominantly as open-source projects. Wiki engines allow users to freely create and edit web pages from a web browser. Wiki documents are generated with a simplified markup language. Wikipedia, for example, has been under construction since 2001 according to the concept embodied in Wiki software. In February 2005 it had more than 460,000 articles, about 77 million words and 16,000 registered contributors, in the English version alone. The most popular Wiki engines cited there are UseMod, TWiki, MoinMoin, PmWiki and MediaWiki. This concept makes it possible to build knowledge bases collectively and in a decentralized way, without the usual instances of control. The Wiki system is changing the forms of networked work and collaboration in many domains. It has been adopted in teaching and research as a tool for producing and editing information that makes it easier to correct and update web pages, socializing the process of authorship and publication. Moreover, the cumulative records of the changes introduced in documents make it possible to represent the history of textual transformations. It is one of the forms of information production and management likely to become most widespread in future work and teaching. [MP]
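The "simplified markup language" mentioned above can be illustrated with a toy converter: two common wiki conventions rendered to HTML with regular expressions. Real engines such as MediaWiki handle far richer syntax; this is only a sketch of the principle.

```python
# Toy illustration of wiki markup: '''bold''' and [[Page]] links
# rendered to HTML. Not the syntax of any one real engine.
import re

def wiki_to_html(text):
    text = re.sub(r"'''(.+?)'''", r"<b>\1</b>", text)              # '''bold'''
    text = re.sub(r"\[\[(.+?)\]\]", r'<a href="\1">\1</a>', text)  # [[Page]]
    return text

print(wiki_to_html("See [[Main Page]] for '''important''' news."))
# See <a href="Main Page">Main Page</a> for <b>important</b> news.
```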


Zotero [version 3.0, January 2012]


Center for History and New Media, George Mason University

Zotero is an easy-to-use yet powerful research tool that helps you gather, organize, and analyze sources (citations, full texts, web pages, images, and other objects), and lets you share the results of your research in a variety of ways. An extension to the popular open-source web browser Firefox, Zotero includes the best parts of older reference manager software (like EndNote)—the ability to store author, title, and publication fields and to export that information as formatted references—and the best parts of modern software and web applications (like iTunes and del.icio.us), such as the ability to interact, tag, and search in advanced ways. Zotero integrates tightly with online resources; it can sense when users are viewing a book, article, or other object on the web, and—on many major research and library sites—find and automatically save the full reference information for the item in the correct fields. Since it lives in the web browser, it can effortlessly transmit information to, and receive information from, other web services and applications; since it runs on one’s personal computer, it can also communicate with software running there (such as Microsoft Word). And it can be used offline as well (e.g., on a plane, in an archive without WiFi).
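The "store fields, export formatted references" idea described above can be sketched minimally as follows. The field names and citation pattern are simplified for illustration; Zotero itself renders citations through CSL styles, and the sample record is invented.

```python
# Minimal sketch: render an author/title/publication record as a
# simple citation string. Illustrative only; not Zotero's CSL engine.
def format_reference(item):
    """Render a stored record as a simple author-date citation."""
    return "{author} ({year}). {title}. {publication}.".format(**item)

item = {
    "author": "Doe, J.",
    "year": "2000",
    "title": "An Example Article",
    "publication": "Journal of Examples",
}
print(format_reference(item))
# Doe, J. (2000). An Example Article. Journal of Examples.
```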




Updated 11 Feb 2013 | ©2005-2013 Manuel Portela