
Publications by Type



Hoernecke, J, Amelung, M, Krieger, K, and Rösner, D (2011)

Flexibles E-Assessment mit OLAT und ECSpooler

The web service ECSpooler provides a flexible way to integrate e-assessment functionality for programming assignments into existing learning management systems (LMSs). So far, the frontends available for this service have been the Plone product ECAutoAssessmentBox and a stand-alone Java client. The LMS OLAT offers the usual testing facilities, e.g., multiple-choice tests or cloze tests, but so far no way to automatically check solutions to assignments typical of computer science education. This paper shows how OLAT can be extended with this e-assessment capability. OLAT acts as a further frontend for ECSpooler and is thus able to use the flexible capabilities of the web service for the automatic checking of programming assignments.

Otto, M, Friesen, R, and Rösner, D (2011)

Message Oriented Middleware for Flexible Wizard of Oz Experiments in HCI

Wizard of Oz (WOZ) systems and WOZ experiments are an important tool for basic and applied research in HCI. We report on using SEMAINE, a flexible component-based middleware with loose coupling of components, as the software infrastructure for WOZ experiments in human-companion interaction. We focus on our experimental WOZ designs, their realisation within the SEMAINE framework, and lessons learned from deploying the implemented solutions as the basis for ongoing controlled experiments with 120 subjects.

Rösner, D, Friesen, R, Otto, M, Lange, J, Haase, M, and Frommer, J (2011)

Intentionality in Interacting with Companion Systems -- an Empirical Approach

We report on a WOZ experiment with a carefully designed scenario that allows us to investigate how users interact with a companion system in a mundane situation involving planning, re-planning, and strategy change. The data collected from the experiments comprises multimodal records (audio, video, biopsychological parameters) and transcripts of the verbal interaction, and all subjects fill out a battery of well-established psychometric questionnaires about various aspects, especially of their personality. This will make it possible to correlate observed behaviour and detected affects and emotions with measured aspects of the subjects' personality, and is expected to serve as a basis for defining a typology of users. In addition, a subgroup of the subjects takes part in semiformal in-depth interviews that focus on retrospective reflection on the users' subjective experience during the experiments and especially on the intentionality that users ascribed to the system during the course of the interaction.

Flexibles E-Assessment auf Basis einer Service-orientierten Architektur

With the eduComponents, a collection of components has been available for several semesters that extends an open-source general-purpose content management system with functions to support learning processes. In this paper we present in particular the components for the automatic checking of student submissions to (programming) assignments and describe the underlying system architecture. We also explain a procedure for specifying new test components (e.g., for previously unsupported programming languages), with which the system can be extended easily and flexibly. Furthermore, we discuss the practical use of the system and present the experience gathered in our courses in the winter semester 2008/2009 and the summer semester 2009.

Adaptive Dialogue Management in the NIMITEK Prototype System

The primary aim of this paper is to present the implementation of adaptive dialogue management in the NIMITEK prototype spoken dialogue system for supporting users while they solve problems in a graphics system (e.g., the Tower-of-Hanoi puzzle). The central idea is that the system dynamically refines a dialogue strategy according to the current state of the interaction. We analyze a recorded dialogue between the user and the prototype system that took place during the testing of the system. It illustrates several points of the implemented dialogue strategy: processing of user’s commands, supporting the user, and multilingual working mode.

Amelung, M and Rösner, D (2008)

Experiences in Hybrid Learning with eduComponents

For five years we have practiced hybrid learning in all our courses by combining classroom lectures and group exercises with Web-based e-learning. In this paper we reflect on our experiences with this learning environment and discuss the changes in teaching and learning that resulted from the new approach, as well as pedagogical concerns and policy issues.

Emotion Adaptive Dialogue Management in Human-Machine Interaction

Although research interest in affected user behavior is increasing rapidly, it remains a challenge for developers of spoken dialogue systems. Research in this domain usually concentrates on the detection of emotional user behavior. However, less attention is devoted to another important research question: how to enable dialogue systems to overcome problems in the interaction related to affected user behavior. This paper addresses this question. The aim of this paper is twofold. First, it introduces an approach to achieving emotion-adaptive dialogue management in human-machine interaction. The focal point of this approach is the dynamic definition of appropriate dialogue strategies aimed at supporting the user in overcoming problems occurring in the interaction. Second, the paper reports on the implementation of a dialogue management module incorporated in the NIMITEK prototype spoken dialogue system.

On the Role of the NIMITEK Corpus in Developing an Emotion Adaptive Spoken Dialogue System

This paper reports on the creation of the multimodal NIMITEK corpus of affected behavior in human-machine interaction and its role in the development of the NIMITEK prototype system. The NIMITEK prototype system is a spoken dialogue system that supports users while they solve problems in a graphics system. The central feature of the system is adaptive dialogue management: the system dynamically defines a dialogue strategy according to the current state of the interaction (including the emotional state of the user). Particular emphasis is devoted to the naturalness of the interaction. We argue that a higher level of naturalness can be achieved by combining a habitable natural language interface with an appropriate dialogue strategy. The role of the NIMITEK multimodal corpus in achieving these requirements is twofold: (1) in developing the model of attentional state at the level of the user's commands, which facilitates the processing of flexibly formulated commands, and (2) in defining the dialogue strategy that takes the emotional state of the user into account. Finally, we sketch the implemented prototype system and describe the incorporated dialogue management module. Whereas the prototype system itself is task-specific, the described underlying concepts are intended to be task-independent.

Amelung, M, Forbrig, P, and Rösner, D (2008)

Towards Generic and Flexible Web Services for E-Assessment

In computer science education, exercise courses and/or lab practices are essential for the learning effect, since they provide opportunities for students to apply their theoretical knowledge to practical problems. The automatic testing and assessment of assignments in a Web-based environment offers students more learning possibilities (e.g., time- and location-independent) with immediate feedback, and helps teachers to reduce their workload so they can concentrate on issues regarding content and didactics. In this paper we present a generic, flexible, and reusable Web-based system architecture and its implementation for the automatic testing of programming assignments and assignments in other formal systems. We also describe practical experience gathered with this approach in computer science courses at two different universities.

The NIMITEK Corpus of Affected Behavior in Human-Machine Interaction

This paper presents the NIMITEK corpus of affected behavior in human-machine interaction. It contains 15 hours of audio and video recordings produced during a refined Wizard-of-Oz (WOZ) experiment designed to induce emotional reactions. Ten native German speakers participated in the experiment; the language used was German. During the collection of the corpus, particular attention was devoted to its ecological validity. Besides the fact that the refined WOZ simulation provided the opportunity to control the development of the dialogue, the problem of role-playing subjects was also successfully addressed. The evaluation of the corpus with respect to its emotional content demonstrated a satisfying level of ecological validity. We summarize the evaluation results in the following points: the corpus contains recordings of genuine emotions that were overtly signaled; it is not oriented towards extreme representations of a few emotions only, but also comprises expressions of less intense emotions; emotional expressions of diverse emotions are extended in modality (voice and facial gesture) and time; and different classes of non-neutral talking style are marked in the obtained data.

Gnjatović, M, Kunze, M, Zhang, X, Frommer, J, and Rösner, D (2008)

Linguistic Expression of Emotion in Human-Machine Interaction: The NIMITEK Corpus as a Research Tool

Since the end of 2005, the NIMITEK consortium has been investigating issues in spoken human-machine interaction (HMI). We employ the multimodal NIMITEK corpus of affected behaviour in HMI as a tool that provides an empirical foundation for the development of integrated emotion detection combining signal-based and content-based emotion recognition. This paper primarily discusses various linguistic features that may carry affect information (e.g., key words and phrases, lexical cohesive agencies, dialogue act sequences). Finally, we report on a first prototype of an automatic annotator for recognizing and tracking the user's emotional state from linguistic information.

A Sustainable Learning Environment based on an Open Source Content Management System

This paper presents our approach for supporting face-to-face courses with software components for e-learning based on a general-purpose content management system (CMS). These components—collectively named eduComponents—can be combined with other modules to create tailor-made, sustainable learning environments, which help to make teaching and learning more efficient and effective. We give a short overview of these components, and we report on our practical experiences with the software in our courses.

eduComponents: A Component-Based E-Learning Environment

We present the eduComponents, a component-based approach to e-learning system architecture. In contrast to typical “integrated” platforms, the eduComponents are implemented as extension modules for a general-purpose content management system (CMS). The components can be used individually, together, and in combination with other modules. The use of a general-purpose (i.e., not e-learning-specific) CMS means that a single platform can be used for e-learning and other Web content, providing the advantages of a uniform user interface, reduced system administration overhead, and extensive code reuse.

Piotrowski, M and Fenske, W (2007)

Interoperabilität von elektronischen Tests

Creating high-quality tests is laborious. It is therefore desirable to be able to reuse and adapt tests once they have been created. To avoid dependence on a single testing platform, standardized exchange formats are needed. In this paper we formulate desiderata for such formats and examine the current de facto standard, the IMS Question & Test Interoperability Specification (QTI), for its suitability. The declared goal of QTI is to enable the exchange of tests between different systems. After analyzing the specification, and based on our experience implementing QTI in the ECQuiz system, we come to the conclusion that QTI is nevertheless unsuitable as an exchange format.

An approach to processing of user’s commands in human-machine interaction

In spoken human-machine interaction, users often produce "irregular" (e.g., elliptical or minor) utterances. Forcing them to always produce complete utterances would be too restrictive. Thus, there is a need to develop structures and algorithms that support the system's decision-making processes when it is confronted with such user inputs. This paper proposes an approach to the processing of users' commands in human-machine interaction. Attentional information is already recognized as crucial for the processing of utterances in discourse. We model attentional information at the level of a user's command and introduce rules for the transition of the focus of attention. Although we also report an implementation for a task-specific scenario, the described modeling method and the introduced rules are intended to be task-independent.

Large-scale Computer-Assisted Assessment in Computer Science Education: New Possibilities, New Questions

Since 2003 we have successively introduced the use of e-learning and computer-assisted assessment (CAA) components into all of our courses, namely online multiple-choice tests, electronic submission of assignments, and automatic testing of programs. We originally did not intend to make major changes to the courses; our primary motivation was just to make them more efficient and more effective by freeing teachers from administrative burdens and by offering more flexibility and interactivity for students. After several semesters of usage we have noticed, however, that the courses have changed much more radically than originally envisaged. The electronic support of face-to-face courses offers many new possibilities, but it also opens up new questions. This paper describes our system and our experience, and discusses some of the questions we have encountered.

A Dialogue Strategy for Supporting the User in Spoken Human-Machine Interaction

Spoken human-machine interaction supported by state-of-the-art dialogue systems is still subject to various problems that may occur in the communication. This paper addresses aspects of the design and implementation of dialogue strategies aimed at supporting the user in overcoming such problems. We discuss which knowledge resources related to the interaction should be taken into account in order to enable the system to select and apply an appropriate dialogue strategy. We consider four interaction features: the focus of attention, the state of the task, the state of the user, and the history of the interaction. In addition, we report on the design and implementation of a dialogue strategy for the computerized version of the Tower-of-Hanoi puzzle. We show how the interaction features can be used to decide when to provide support to the user, what kind of support to provide, and how to provide it.

Processing Dialogue-Based Data in the UIMA Framework

Processing dialogue-based data requires handling different kinds of data (e.g., video, audio, and text). This paper presents our experiences using the UIMA framework to process dialogues from the NIMITEK corpus.

Webbasierte Dienste für das E-Assessment

Acquiring programming skills requires, besides theoretical understanding, above all practical exercise. The automatic checking of programming assignments helps to provide students with more practice opportunities and faster feedback, while at the same time relieving teachers so that they can concentrate on questions of content and didactics. We present a lightweight, service-based architecture and its implementation for the automatic checking of student submissions, and report on practical experience with using this software in our courses.

Tactical, document-oriented e-learning components

Most university e-learning strategies mandate the use of a centralized university-wide learning platform. The learning management systems typically employed in this function are "integrated" platforms, i.e., large-scale systems providing most common e-learning functions in a single application. There are, however, a number of issues with this type of system: due to their size and complexity they can be difficult and expensive to operate and administrate; we feel that they are not flexible enough to allow teachers to make tactical decisions; and, since these systems cannot be used for the management of "normal" Web sites, they separate learning content from other content and duplicate functionality and administration. This paper presents an alternative approach: e-learning components that extend a general-purpose content management system with e-learning functionality, enabling the use of a single platform for learning and non-learning content and the creation of tailor-made e-learning environments.

EduComponents: Experiences in E-Assessment in Computer Science Education

To reduce the workload of teachers and to improve the effectiveness of face-to-face courses, it is desirable to supplement them with Web-based tools. This paper presents our approach for supporting computer science education with software components which support the creation, management, submission, and assessment of assignments and tests, including the automatic assessment of programming exercises. These components are integrated into a general-purpose content management system (CMS) and can be combined with other components to create tailored learning environments. We describe the design and implementation of these components, and we report on our practical experience with deploying the software in our courses.

Gathering Corpora of Affected Speech in Human-Machine Interaction: Refinement of the Wizard-of-Oz Technique

The primary aim of this paper is to address the methodological desiderata in obtaining a corpus of affected speech in human-machine interaction. We propose requirements that must be met so that a Wizard-of-Oz scenario designed to elicit affected speech can result in ecologically valid data. In addition, we report on the Wizard-of-Oz experiment conducted in the framework of the NIMITEK project, whose primary goal is to investigate the role of emotions and intentions in human-machine dialogue. In the first phase of the experiment the focus was on prosody as a means to communicate emotional state.

E-Learning-Komponenten zur Intensivierung der Übungen in der Informatik-Lehre – ein Erfahrungsbericht

Exercises are a central element of computer science education. Starting from didactic considerations of how exercise courses can be intensified and made more efficient through e-learning components, in particular through forms of computer-aided assessment, we developed the eduComponents. This is a collection of extension modules that add e-learning functionality to a general-purpose CMS (Plone). These freely available modules have been used successfully for several semesters, both in all courses of our group and at other institutions.

Integration von E-Assessment und Content-Management

Formative tests can be useful for teachers and learners alike. Web-based multiple-choice tests can help reduce the effort required for formative tests and thus enable broader and more frequent use. We present a module for the content management system Plone that makes it possible to use and manage multiple-choice tests just like other resources. In this way, tests can be closely linked with the other teaching and learning materials available online (e.g., lecture notes or exercise sheets), especially in face-to-face courses for which no learning platform is usually used. The module also supports the import and export of questions according to IMS QTI; in this context we also discuss our experience with this specification.

Rösner, D and Amelung, M (2005)

A Web-based Environment to Support Teaching of Programming Paradigms

We report on our approach to employing the WWW to support lecture-room teaching of programming paradigms (e.g., functional, logical, and object-oriented) by means of interactive web-based tools for students: immediate feedback on interactively submitted solutions to programming tasks (e.g., in Haskell and Scheme), structured submissions of induction proofs, and interactive multiple-choice questionnaires.

LlsChecker - ein CAA-System für die Lehre im Bereich Programmiersprachen

We report on the design, implementation, and use of the LlsChecker system. LlsChecker is a component, integrated into a content management system (CMS) for teaching and learning materials, for the automatic checking of student solutions to programming assignments in different functional programming languages. The system is organized so generically that extending its services to further languages, at least for functional programming languages, is possible solely through an XML-based declaration.

Kruse, PM, Naujoks, A, Rösner, D, and Kunze, M (2005)

Clever Search: A WordNet Based Wrapper for Internet Search Engines

This paper presents an approach to enhancing search engines with information about word senses available in WordNet. The approach exploits information about the conceptual relations within the lexical-semantic net. In the presented wrapper for search engines, WordNet information is used to make a user's request more specific or to classify the results of publicly available web search engines such as Google or Yahoo.


Kunze, M and Rösner, D (2005)

Transforming Business Rules Into Natural Language Text

The aim of the project presented in this paper is to design an NLG architecture that supports the documentation process for eBusiness models. A major task is to enrich the formal description of an eBusiness model with the additional information needed for an NLG task.

Xiao, C and Rösner, D (2004)

A Detection Algorithm for Multiword Verbs in the English Sub-language of MEDLINE Abstracts

In this paper, we investigate multiword verbs in the English sub-language of MEDLINE abstracts. Based on the integration of domain-specific named entity knowledge with syntactic as well as statistical information, this work mainly focuses on how to evaluate a proper multiword verb candidate. Our results present a sound balance between low- and high-frequency multiword verb candidates in the sub-language corpus. We achieve an F-measure of 0.753 when testing on a manually sampled subset consisting of multiword candidates with both low and high frequencies.

Kunze, M and Rösner, D (2004)

Context Related Derivation of Word Senses

Real applications of natural language document processing are very often confronted with domain-specific lexical gaps during the analysis of documents from a new domain. This paper describes an approach for deriving domain-specific concepts for the extension of an existing ontology. As resources we need an initial ontology and a partially processed corpus of a domain. We exploit the specific characteristics of the sublanguage in the corpus. Our approach is based on syntactic structures (noun phrases) and compound analyses to extract the information required for the extension of GermaNet's lexical resources.

Kunze, M and Rösner, D (2004)

Corpus based Enrichment of GermaNet Verb Frames

Lexical-semantic resources like WordNet are often used in real applications of natural language document processing. For example, we integrated GermaNet in our document suite XDOC for processing German forensic autopsy protocols. In addition to the hypernymy and synonymy relations, we want to adapt GermaNet's verb frames for our analysis. In this paper we outline an approach for the domain-related enrichment of GermaNet verb frames through corpus-based syntactic and co-occurrence analyses of real documents.

Rösner, D, Dürer, U, Esperer, H, Moore, A, Parr, G, Logan, M, and Zieger, K (2000)

An XML and Ontology based Methodology and Authoring Environment for Medical Information Systems


E-Assessment as a Service

Assessment is an essential element in learning processes. It is therefore not surprising that almost all learning management systems (LMSs) offer support for assessment, e.g., for the creation, execution, and evaluation of multiple-choice tests. We have designed and implemented generic support for assessment that is based on assignments that students submit as electronic documents. In addition to assignments that are graded by teachers, we also support assignments that can be automatically tested and evaluated, e.g., assignments in programming languages or other formal notations. In this paper we report on the design and implementation of a service-oriented approach to the automatic assessment of programming assignments. The most relevant aspects of our "assessment as a service" solution are that, on the one hand, the advantages of automatic assessment can be used with a multitude of programming languages as well as other formal notations (as so-called backends); on the other hand, these types of assessment can easily be interfaced with different existing learning management systems (as so-called frontends). We also report on the practical use of the implemented software components at our university and other educational institutions.

Kunze, M and Rösner, D (2004)

Issues in Exploiting GermaNet as a Resource in Real Applications

This paper reports on experiments with GermaNet as a resource within domain-specific document analysis. The main question to be answered is: what is the coverage of GermaNet in a specific domain? We report on the results of a field test of GermaNet for the analysis of autopsy protocols and sketch the integration of GermaNet into XDOC. Our remarks will contribute to a GermaNet user's wish list.

Rösner, D, Kunze, M, and Krötzsch, S (2004)

Transforming and Enriching Documents for the Semantic Web

We suggest employing techniques from Natural Language Processing (NLP) and Knowledge Representation (KR) to transform existing documents into documents amenable to the Semantic Web. Semantic Web documents have at least part of their semantics and pragmatics marked up explicitly in both a machine-processable and a human-readable manner. XML and its related standards (XSLT, RDF, Topic Maps, etc.) are the unifying platform for the tools and methodologies developed for the different application scenarios.

Books and Book Chapters

Kunze, M and Rösner, D (2004)

XDOC - Extraktion, Repräsentation und Auswertung von Informationen mit einer XML-basierten Document Suite

In many fields, large collections of electronically available documents exist. Many users would like to analyze these document collections automatically for various purposes. So far, however, only few convenient tools and hardly any linguistic resources, in particular for German, are available to them. This is where the work on the XDOC project comes in. The potential users of the system are domain experts, i.e., engineers, physicians, economists, and others, who have in common that they both possess large document collections and have a strong interest in analyzing them. The analysis will typically proceed in an exploratory manner, i.e., further, initially unanticipated questions arise while working with the document collections. The entry barrier should be as low as possible, i.e., users should be able to start their experiments without first having to perform costly preparatory work, e.g., for lexicon construction. This has consequences for the robustness of the tools; in particular, they must be able to cope with lexical gaps. The working environment should be as familiar as possible to the users; a user interface oriented towards web browsers suggests itself.

Examples of application scenarios currently being worked on are:
- the analysis of document collections containing technical knowledge, in the sense of supporting formalization and knowledge acquisition;
- the analysis of collections of autopsy protocols;
- the analysis of web pages with information about companies and the creation of standardized company profiles.

All applications have in common that they comprise tasks that are not only of a computational-linguistic nature but also require document processing in a broader sense. This includes questions of the storage and representation of documents, of efficient algorithms for handling document collections, and of the presentation of analysis results and internal structures.

In summary, the presented approach can be characterized as follows: XML is used as the uniform formalism for the system, i.e., all modules expect XML documents as input and deliver their results in the same format, all resources are encoded in XML, and process information is also represented uniformly in XML. This has the advantage that reusable, universally applicable modules could be designed and implemented for recurring tasks in handling XML data structures. The XML-based system presented in this contribution, the document suite XDOC (XML-based Document Processing), combines various approaches from computational linguistics, e.g., POS tagging and syntactic analysis, but also approaches to semantic analysis. The following sections describe the development concept realized in the document suite and present its various functions and their results.

Kunze, M and Rösner, D (2003)

Natural Language Processing for Web Document Analysis

In this chapter we present an approach to the analysis of web documents — and other electronically available document collections — that is based on the combination of XML technology with NLP techniques. A key issue addressed is to offer end users a collection of highly interoperable and flexible tools for their experiments with document collections. These tools should be easy to use and as robust as possible. XML is chosen as a uniform encoding for all kinds of data: input and output of modules, process information and linguistic resources. This allows effective sharing and reuse of generic solutions for many tasks (e.g. search, presentation, statistics, transformation).


Piotrowski, M (2009)

Document-Oriented E-Learning Components

This dissertation questions the common assumption that e-learning requires a learning management system (LMS) such as Moodle or Blackboard. Based on an analysis of the current state of the art in LMSs, we come to the conclusion that the functionality of conventional e-learning platforms consists of basic content management and communication facilities (such as forums, chats, wikis, etc.) and functionality for assessment (such as quizzes). However, only assessment functionality is actually specific to e-learning. Furthermore, the content management and communication functionality in e-learning platforms is typically restricted and often inferior when compared with the more general implementations available in Web content management systems. Since content management systems (CMSs) offer more general and more robust functions for managing content, we argue that e-learning platforms should be based on content management systems. Only assessment functions are actually specific to e-learning and need to be added to a CMS; this requires the architecture of the CMS to be modular. As a proof of concept, we have designed and implemented the eduComponents, a component-based e-learning system architecture, realized as software components extending a general-purpose content management system with facilities for course management and assessment. The eduComponents have been successfully used for several semesters at Otto von Guericke University and other institutions. The experience with the eduComponents provides practical evidence for the theses put forward in this dissertation and for the feasibility of the eduComponents approach. The research done for this dissertation has also resulted in practical definitions for e-learning and e-learning platform, terms which are notoriously ill-defined. Based on these definitions, we have developed an innovative way to assess and to visualize the areas of functionality of e-learning environments.


Dervaric, C (2007)

Erkennung und Behandlung von Plagiaten bei Lösungen zu Übungsaufgaben

The goal of this Diplom thesis is to provide instructors with a suitable tool for more easily detecting plagiarism among student submissions. To this end, the so-called eduComponents, in particular the integrated ECAssignmentBox, are to be extended with a function that allows instructors to identify possible plagiarism and then to review the suspected cases conveniently. Submissions may include both programming assignments and free natural-language texts.

Feustel, T (2006)

Analyse von Texteingaben in einem CAA-Werkzeug zur elektronischen Einreichung und Auswertung von Aufgaben

This Master's thesis deals with the analysis of text input in a CAA (computer-assisted assessment) tool. It examines the possibilities of such text input and how it can be checked automatically. Free-text answer questions and mathematical expressions lend themselves to this investigation. The idea is to have students enter their answers into the CAA system as free text, giving them the greatest possible freedom in answering the questions. Building on this investigation, a prototype is to be implemented and evaluated. The implementation is written in Python and is an add-on component for the ECAutoAssessmentBox, which is used with the Plone content management system.

Denecke, M (2005)

Strukturanalyse von Dokumenten im E-Commerce

This thesis examines the structure of documents in electronic commerce, based on the general terms and conditions of selected e-shops in the book trade.

Veeramachaneni, B (2005)

Conceptualization of Teaching Material

The Internet and World Wide Web are being used as support aids to facilitate the delivery of teaching and learning materials. The content of related courses taught at different universities and organizations tends to be strikingly similar, so the potential gains from sharing teaching material are high. The problem is that most systems use different formats, languages, and vocabularies to represent and store these resources. Hence there is no way for two different applications to interoperate even if their teaching contents belong to the same domain, and the knowledge exposed by one cannot be used by another. A possible solution to the problem of sharing and reusing learning resources is a shared vocabulary. Ontologies provide a shared and common understanding of a domain that can be communicated between people and heterogeneous application systems. An important aspect of the interoperability of learning objects is a common format for describing content. In this thesis, ontologies for teaching material are developed, covering both the content and the metadata of teaching material. Metadata helps people organize, find, and use resources effectively. The IEEE Learning Object Metadata (LOM) standard was developed to provide structured metadata descriptions of learning resources, called Learning Objects, in order to enable semantic interoperability among applications in the e-learning domain; the metadata properties adequate for this application are taken from IEEE LOM. If applications share a common ontology of teaching material, the teaching material of one application can be used by another. This also enables intelligent integration, such as sharing, searching, and reusing information among applications.
