1 Introduction, Principles of d2d
XML is a de facto standard for the encoding of semi-structured text corpora. Its practical applications range from mere technical configuration data to web sites with entertainment content.
Both notions, “XML” and “text”, stand here for very different things: on the one side, the organization of the internal computable text model as a tree structure, with its standardized update and retrieval methods (“W3C DOM”) and the family of tools operating on it (implementing “XSLT”, “XQuery”, etc.). On the other side, its external representations: Unicode text files containing a lot of “angle brackets”, their decoding governed by a historically grown collection of barely comprehensible, non-compositional quoting rules.
All this hinders the creation of XML-encoded texts in the creative flow of authoring. Nor are syntax-controlled editors, which support tagging by menu-driven selection, auto-completion, automated coloring and indenting, a solution for all those authors who experience “writing” as a creative, flowing, intuitive and intimate process, in direct contact with that mysterious thing called text.
So far, XML appears inappropriate for this kind of authoring situation. Nevertheless, its use is often highly desirable: technical documentation, cookbooks, screenplays, song lyrics, scientific analyses, even multi-volume fantasy novels can profit extraordinarily from only a little interspersed mark-up.
This is the starting point of the “d2d” project. The name stands for “directly to document” or “direct document denotation” and is hence pronounced “triple dee”. The project tries to close this gap: it is both a text format which realizes XML mark-up in a very unobtrusive way, and a software system which implements parsing, translation, parser definitions, documentation, user guidance, etc.
It is based on a simple idea, the realization of which turned out to be surprisingly complex, and it has been driven forward by the authors for more than ten years now. (A rather early version is described in ; full documentation can be found at .) The main characteristics are:
- Simple writing and good readability for humans (without the need for any dedicated tool) as well as for machines.
- All tags marked with one single, user-defined character.
- Inference of (nearly all) closing tags.
- Inference also of opening tags, by a second, character-based level of parsers, used for small, highly structured data entities interspersed in flowing text.
- Support for standard text format definition formats (e.g. W3C DTD).
- A text format definition language of its own (required at least for the character-level parser definitions). It employs free rewriting for parametrization and re-use of modules, and multi-lingual user documentation.
So d2d is a concept, a format and a software system which addresses domain experts, enables them to write XML-compatible texts, and potentially opens to them the whole world of XML-based processing. D2d has been successfully employed in the very diverse fields of technical documentation, book-keeping, web content creation, interactive music theory, etc.
2 The d2d Parsing Process
2.1 Principles of the d2d Parsing Process
The process of reading a text file, interpreting it as conforming to a particular text format definition in the d2d format, and constructing the corresponding internal model is called “d2d parsing”. This model can be written out as an external representation according to the XML standard . The implementation of the d2d tool also allows processing this model directly, e.g. applying a collection of XSLT rules to derive other models to be written out. The parsing process is controlled by the chosen root element. Element definitions can have tagged or character content models:
In the tagged case, the resulting sub-tree of the model is constructed according to the tags appearing in the input text. Tags are marked by a single user-defined lead-in character, which defaults to “#”. Closing tags can in most cases be omitted, because the parser uses a simple LL(1) strategy: whenever an opening tag can be accepted on some currently open higher level of the result tree, all intervening closing tags are supplied automatically. Nevertheless, explicit closing tags may be added to resolve ambiguities, to close more than one stack level, or to increase the readability of the source.
Character-based parsers accept plain character data. All tagging is added automatically, as defined by the applicable parser rules. The basic strategy is a non-deterministic longest match. This mechanism therefore performs well for short input data, e.g. some ten lines of MathML. In practice this covers most instances of structured entities interspersed in flowing text. Besides, non-deterministic rules are much easier to define for computer language laymen.
2.2 File Sections
Every file to be processed by the d2d tool may start with sections containing local definitions. These have the form
(In this paragraph “␣” stands for non-empty sequences of whitespace and newlines.) is a prefix not containing such a section and will be discarded entirely. This allows d2d input to be contained in arbitrary documents, like e-mails, etc.
must be a valid module definition in the ddf format (see Section 3). It will be parsed and the contained definitions can be used immediately in the following text corpus.
Zero or more such local definition sections can be contained in an input file. Finally, one of the two following forms must appear:
In this case, is the name of a module, and is the name of a tag parser definition from . This is used as the topmost element of the document structure to be parsed and thus defines the initial state of the parsing process. The other possibility is
In this case an XSLT source will be parsed, and the module and the tag identify the top-level element of the output to be generated by the XSLT code.
In both cases, the rest of the file immediately after the “” is the text corpus input, fed to the d2d parser, up to a final explicit “#eof”.
2.3 Tokenization
The function in Table 1 defines the next step for processing the text corpus data (not the local module definitions), namely converting the stream of characters into a stream of tokens. The comment lead-in character “ ” and the command character “” can be re-defined by the user and default to “/” and “#”, respectively. The tokenization process is defined by applying a longest-prefix-match discipline to the transformation rules given for “”. The closing and empty tags with three slashes mark those elements which are intentionally left incomplete by the user. The reaction of the tools in these cases is configurable.
The tokenization level supports a limited set of one-character parentheses, as known from sed’s “s%...%...%” and LaTeX’s “\verb%...%” syntax. It is unrelated to the parser level, which can cause funny effects, but this has nevertheless turned out to be the cleanest way to define it.
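The longest-prefix-match discipline can be illustrated by the following sketch. It is our own simplified model, not the actual d2d tokenizer: the concrete token patterns (OPEN, CLOSE, CLOSE3, TEXT) and the `tokenize` function are hypothetical; only the lead-in character “#” and the three-slash convention for intentionally incomplete tags are taken from the text above.

```python
import re

# Illustrative longest-prefix-match tokenization: at each position all rules
# are tried, and the rule with the longest match wins. Patterns are our own
# guesses; only "#" as command character and "#///" for intentionally
# incomplete tags follow the description above.
RULES = [
    ("CLOSE3", re.compile(r"#///[A-Za-z]\w*")),   # intentionally incomplete
    ("CLOSE",  re.compile(r"#/[A-Za-z]\w*")),
    ("OPEN",   re.compile(r"#[A-Za-z]\w*")),
    ("TEXT",   re.compile(r"[^#]+")),
]

def tokenize(src):
    pos, out = 0, []
    while pos < len(src):
        # Collect all rules matching at `pos`, keep the longest match.
        kind, text = max(
            ((k, m.group()) for k, r in RULES
             if (m := r.match(src, pos))),
            key=lambda kt: len(kt[1]))
        out.append((kind, text))
        pos += len(text)
    return out

tokens = tokenize("#p hello #/p")
# tokens == [("OPEN", "#p"), ("TEXT", " hello "), ("CLOSE", "#/p")]
```

Ordering the rules is irrelevant here: the longest-prefix discipline, not rule priority, decides which token is emitted.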
2.4 Tag Based Parsing
The second step, parsing, converts a sequence of tokens from to a single node from , as defined by the function in Tables 2 and 3. (The formulas in these tables have been published in .) This node represents the top-most element of the resulting document model, which, as soon as it is completed, can be shipped out to the standard XML file format or processed further by XSLT.
Parsing always starts in tag mode, i.e. looking for explicit tags “” in the token stream. Character data is treated as if tagged with an implicit pseudo-tag “”. The tag parsing process is a stack-controlled recursive descent parser. Whenever a tag is consumed, a new stack level from is possibly added, and the corresponding content model is made the new accepting state machine. This is performed by the function , which delivers a new stack prefix. Its definition is comparatively simple, since (a) it is only called when the next tag is contained in , and (b) all sets in all alternatives are disjoint.
The stack levels represent the choice points at which the parsing process can later be continued. Whenever a tag (opening or closing) is reached which cannot be consumed in the current state, the stack is unwound in search of the first possibility, by the functions and . Only if such a possibility is found are all intermediate stack frames closed, and all material collected there is packed into objects. (The third parameter of the functions is an accumulator for these; finally their sequence is wrapped in the highest closed and appended to the contents of the parent element’s .) If some non-optional content is missing from the closed frames, an error message element “missing()” is synthesized and inserted into the resulting model. If no such frame is found, the input is ignored, the stack is left unchanged, and a “skipped()” error message is inserted instead.
Due to this feature, the function is always total, an important property when addressing domain experts, who are not language experts. An interesting philosophical question is the definition of the content model reported by (): e.g. when the original syntax requires something like “(a|b)?, x, d+”, which is not matched by the input, then minimally only “x, d” is required (in the strict sense of the word) to make the input complete. Nevertheless, we decided to report the subexpression from the original structure definition as a whole, to make the error more easily locatable by the user.
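The stack-controlled descent with closing-tag inference and “skipped” recovery can be sketched as follows. This is a deliberately simplified model, not the actual d2d implementation: content models are reduced to plain sets of accepted child tags, and the element names are invented for illustration.

```python
# Illustrative sketch of closing-tag inference and skip recovery.
# Each element's content model is simplified to the set of child tags
# it accepts (real d2d uses full content-model state machines).
ACCEPTS = {
    "doc":     {"section"},
    "section": {"title", "p"},
    "title":   set(),
    "p":       set(),
}

def parse(tags, root="doc"):
    """Build (tag, children) nodes from a flat tag stream,
    inferring all omitted closing tags LL(1)-style."""
    stack = [(root, [])]
    for tag in tags:
        # Unwind only if some open level accepts the tag; otherwise the
        # input is ignored and the stack left unchanged ("skipped" error).
        if not any(tag in ACCEPTS[name] for name, _ in stack):
            continue
        while tag not in ACCEPTS[stack[-1][0]]:
            name, kids = stack.pop()          # infer a closing tag
            stack[-1][1].append((name, kids))
        stack.append((tag, []))
    while len(stack) > 1:                     # close everything at end of input
        name, kids = stack.pop()
        stack[-1][1].append((name, kids))
    return stack[0]

tree = parse(["section", "title", "section", "p"])
# tree == ("doc", [("section", [("title", [])]),
#                  ("section", [("p",     [])])])
```

Note how the second “section” implicitly closes both the open “title” and the open “section”, exactly the behavior the omitted closing tags rely on.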
2.5 Parsing of XSLT Sources
The algorithm is slightly enhanced to parse XSLT sources. In these, elements (and attributes) of the XSLT language and those of the target language appear intermingled. Again, we want to write with the least noise, so both categories must be recognized automatically, as far as possible. The following measures are taken:
Basically, the XSLT language and the target language must both be provided as text structure definitions. They are parsed by switching between the corresponding state machines, in the style of “co-routines”.
All reachable elements of the target language are collected, as well as all XSLT elements which are allowed to contain target language elements.
Whenever the contents of an element from the latter set are parsed, an opening tag from the former set may appear, in addition to the normal parsing as controlled by the XSLT grammar’s state machine.
Vice versa, dedicated (“productive”) XSLT elements can appear anywhere in a target element. Whenever this happens, the parsing process is switched to “weak mode”, a variant in which every target content expression is additionally allowed to match the empty string, as if decorated with a “?”.
A more promising approach would be the integration of Fragmented Validation (FV), a technique which parses the result fragments in an XSLT source with a non-deterministic parser following all possible situations in parallel. It has been presented in , but has not yet been integrated into the d2d tool.
The tags of the target language have priority over identical tags from XSLT. For the latter, prefixed aliases are generated automatically whenever necessary.
In practice it has turned out that this format for writing down XSLT sources allows a work-flow nearly the same as with a dedicated programming language front-end.
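The “weak mode” described above can be modelled as a transformation on content expressions: every target-language expression is wrapped so that it additionally accepts the empty sequence. The following sketch uses our own toy encoding of content models (the constructors "seq", "tag", "opt" are illustrative, not ddf syntax).

```python
# Toy content expressions: ("seq", a, b), ("tag", name), ("opt", a).
# matches() returns the set of token-tuple suffixes left after matching.

def matches(expr, toks):
    kind = expr[0]
    if kind == "tag":
        return {toks[1:]} if toks and toks[0] == expr[1] else set()
    if kind == "opt":
        return {toks} | matches(expr[1], toks)
    if kind == "seq":
        return {rest for mid in matches(expr[1], toks)
                     for rest in matches(expr[2], mid)}
    raise ValueError(kind)

def weaken(expr):
    """Weak mode: every node may additionally match the empty sequence,
    as if decorated with '?'."""
    kind = expr[0]
    if kind == "tag":
        return ("opt", expr)
    return ("opt", (kind, *map(weaken, expr[1:])))

model = ("seq", ("tag", "title"), ("tag", "p"))
toks  = ("p",)                                  # "title" is missing
strict_ok = () in matches(model, toks)          # False: strict mode fails
weak_ok   = () in matches(weaken(model), toks)  # True: weak mode succeeds
```

This is why productive XSLT elements may interrupt target content at any point: the remaining, possibly unfinished, target expression still matches in weak mode.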
2.6 Character Based Parsing
Whenever the opening tag of an element has been consumed which is declared as a character parser, the text input is redirected to the parsing process defined in Table 4. Character-based parsers have non-deterministic semantics: a set of hypotheses is maintained in parallel, and at the end the longest matched prefix is delivered.
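The set-of-hypotheses strategy can be sketched as follows. The encoding is our own (rules reduced to a toy NFA over character predicates); it only illustrates the principle of advancing all hypotheses in parallel and delivering the longest accepted prefix.

```python
# Illustrative non-deterministic longest-prefix match over character rules.
# transitions: state -> list of (char_predicate, next_state).
def longest_match(transitions, accepting, start, text):
    """Advance all hypotheses in parallel; return the longest accepted prefix."""
    states = {start}
    best = 0 if start in accepting else -1
    for i, ch in enumerate(text):
        states = {nxt for s in states
                      for pred, nxt in transitions.get(s, [])
                      if pred(ch)}
        if not states:
            break                    # all hypotheses failed
        if states & accepting:
            best = i + 1             # some hypothesis accepts this prefix
    return text[:best] if best >= 0 else None

# Toy rule: an integer, optionally followed by a decimal fraction.
digit = str.isdigit
nfa = {0: [(digit, 1)],
       1: [(digit, 1), (lambda c: c == ".", 2)],
       2: [(digit, 3)],
       3: [(digit, 3)]}
result = longest_match(nfa, {1, 3}, 0, "3.14cm")
# result == "3.14"
```

Since only a set of states is tracked per input position, the cost per character is bounded by the number of rule states, which is acceptable for the short inputs this mechanism targets.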
The first operator special to the character level is “~”, which denotes sequential composition without intervening whitespace. This holds also for the repetition operators “~*” and “~+”. The operator “,” is taken over from the tag level, for convenience, and means sequential composition with arbitrary intervening whitespace. The operator “&” for permutation is currently not supported on the character level. The operator “>” defines a greedy sub-expression, in which non-determinism is overruled by longest-prefix matching. The operators “~*” and “~+” are greedy anyhow when applied to plain character sets.
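The difference between tight (“~”) and loose (“,”) sequencing, and the effect of the greedy operator “>”, can be illustrated with three small combinators. The encoding (parsers as functions from a position to a set of end positions) is our own illustrative choice, not the d2d implementation.

```python
# Parsers map (text, pos) -> set of possible end positions (non-deterministic).
def lit(s):
    return lambda t, i: {i + len(s)} if t.startswith(s, i) else set()

def tight(p, q):          # "~" : sequence, no intervening whitespace
    return lambda t, i: {k for j in p(t, i) for k in q(t, j)}

def loose(p, q):          # "," : sequence, arbitrary intervening whitespace
    def run(t, i):
        ends = set()
        for j in p(t, i):
            while j < len(t) and t[j].isspace():
                j += 1
            ends |= q(t, j)
        return ends
    return run

def greedy(p):            # ">" : keep only the longest hypothesis
    return lambda t, i: {max(p(t, i))} if p(t, i) else set()

ab_tight = tight(lit("a"), lit("b"))
ab_loose = loose(lit("a"), lit("b"))
assert ab_tight("a b", 0) == set()      # whitespace forbidden by "~"
assert ab_loose("a b", 0) == {3}        # whitespace skipped by ","
```

Under “>”, the hypothesis set collapses to a single longest match, which is exactly how non-determinism is locally overruled.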
The generated XML output will be one single element, with the parser’s identifier as its tag, if no further structure is defined. Structure is introduced by nesting the constructs “”, which generate an element with as its tag and the parsed contents. These contents, again, are simply the parsed character data if no such constructs are contained recursively, or the sequence of the resulting XML elements otherwise.
As soon as the character parser cannot be continued, control returns to the tag parsing level. An explicit closing tag for the parser may follow at this point, but is never required.
3 Text Format Definitions
The genuine d2d text structure definition language “ddf” supports (a) the definition of element content models, as tag or character parsers, plus (b) various additional parameters.
The content models basically follow the same design principles as known from W3C DTD  or Relax NG . One difference is the “&” operator, which does not stand for interleaving (as in Relax NG) but only for permutation. New is the “@” operator, which inserts the content model of the referred definition into any expression, allowing a definition to act both as an element specification and as a mere internal constant (with an expression as its value).
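The effect of the “@” operator can be sketched as a splicing step over content expressions. The encoding below is our own illustration (the constructors "seq", "tag", "insert" and the definition names are hypothetical, not ddf syntax): the referred definition contributes its content model in place, instead of generating an element.

```python
# Sketch of "@"-style insertion: a definition's content model is spliced
# into the referring expression instead of producing a child element.
DEFS = {
    "name":   ("seq", ("tag", "first"), ("tag", "last")),
    "person": ("seq", ("insert", "name"), ("tag", "birthday")),
}

def expand(expr):
    """Recursively replace ("insert", d) by the content model of d."""
    if expr[0] == "insert":
        return expand(DEFS[expr[1]])
    if expr[0] == "tag":
        return expr
    return (expr[0], *map(expand, expr[1:]))

expanded = expand(DEFS["person"])
# expanded == ("seq", ("seq", ("tag", "first"), ("tag", "last")),
#              ("tag", "birthday"))
```

So "person" accepts first, last, birthday directly, without a wrapping "name" element; referencing "name" as a tag instead would produce that extra element.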
Each definition may carry attributes related to very different layers: The XML tag can be set, overriding the identifier of the definition, which is the default; an XML namespace URI can be defined; different formats for editing and parsing can be specified; definitions can open a local scope for tags; etc.
More importantly, user documentation in different languages can be attached to every definition, as well as XSLT rules for transformation into different back-ends. Both features employ d2d recursively to document or process itself: the former employs a standard format for technical documentation, readable by humans; the latter employs the XSLT source format described above, instantiated with the text structure definition of the target format.
All definitions are organized in a hierarchy of modules; each top-level module must be locatable by some rule mapping the module name to a file location, or similar.
Modules can be imported into other modules. Thereby an “import key” is defined which, used as a prefix, makes the definitions of the imported module accessible in all expressions of the importing module. This includes “automatic re-export”: the imports of the imported module are accessible by concatenating the import keys, etc.
Additionally, substitutions can be defined which apply to all expressions of the imported module (or only to the expression of one particular definition). Each such substitution replaces a particular reference, i.e. a sequence of identifiers meant as a reference to a definition, by a given expression, evaluated in the context of the importing module. Furthermore, a particular module import in the imported module can be replaced as a whole by a different one, defined in the importing module. This selection of parametrization mechanisms has turned out to be very powerful and adequate for maintaining and developing mid-scale text structure architectures, like the examples listed at the beginning.
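Reference resolution through chained import keys (“automatic re-export”) can be modelled as a simple recursive lookup. The module representation and names below are our own illustration, not the actual ddf implementation.

```python
# Each module maps import keys to other modules and local names to
# definitions. A reference "k1.k2.name" walks the chain of import keys.
MODULES = {
    "base": {"imports": {},              "defs": {"para": "PARA-MODEL"}},
    "book": {"imports": {"b": "base"},   "defs": {"chapter": "CHAP-MODEL"}},
    "site": {"imports": {"bk": "book"},  "defs": {}},
}

def resolve(module, ref):
    """Resolve a dotted reference against a module's import keys."""
    head, _, rest = ref.partition(".")
    mod = MODULES[module]
    if rest:                         # head must be an import key
        return resolve(mod["imports"][head], rest)
    return mod["defs"][head]         # plain local definition

definition = resolve("site", "bk.b.para")
# definition == "PARA-MODEL": "bk" reaches "book", whose key "b"
# re-exports "base", where "para" is defined.
```

A substitution, in this model, would simply rewrite selected references before resolution, and replacing a module import as a whole corresponds to rebinding one entry in an "imports" map.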
-  Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, Eve Maler, François Yergeau, and John Cowan. Extensible Markup Language (XML) 1.1 (Second Edition). W3C, http://www.w3.org/TR/2006/REC-xml11-20060816/, 2006.
-  James Clark and Makoto Murata. Document Schema Definition Language (DSDL) -- Part 2: Regular-grammar-based validation -- RELAX NG. ISO/IEC, http://standards.iso.org/ittf/PubliclyAvailableStandards/c052348_ISO_IEC_19757-2_2008(E).zip, 2008.
-  Markus Lepper and Baltasar Trancón y Widemann. d2d --- a robust front-end for prototyping, authoring and maintaining XML encoded documents by domain experts. In Joaquim Filipe and J. G. Dietz, editors, Proceedings of the International Conference on Knowledge Engineering and Ontology Development, KEOD 2011, pages 449--456, Lisboa, 2011. SciTePress.
-  Markus Lepper, Baltasar Trancón y Widemann, and Jacob Wieland. Minimize mark-up! -- Natural writing should guide the design of textual modeling frontends. In Conceptual Modeling --- ER 2001, volume 2224 of LNCS. Springer, November 2001.
-  Markus Lepper and Baltasar Trancón y Widemann. Fragmented validation --- a simple and efficient contribution to XSLT checking (extended abstract). In Proc. ICMT 2013, International Conference on Theory and Practice of Model Transformations, volume 7909 of LNCS. Springer, 2013.
-  Baltasar Trancón y Widemann and Markus Lepper. The BandM Meta-Tools User Documentation. http://bandm.eu/metatools/docs/usage/index.html, 2010.