Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU - Teil 01/02

Ludwig-Maximilians-Universität München

About

The University Library (UB) maintains an extensive archive of electronic media, ranging from full-text collections, newspaper archives, dictionaries and encyclopedias to comprehensive bibliographies and more than 1,000 databases. On iTunes U, the UB provides, among other things, a selection of dissertations written by doctoral candidates at LMU. (This is part 1 of 2 of the collection 'Fakultät für Mathematik, Informatik und Statistik - Digitale Hochschulschriften der LMU'.)

250 episodes

Generalized Bayesian inference under prior-data conflict

This thesis is concerned with the generalisation of Bayesian inference towards the use of imprecise or interval probability, with a focus on model behaviour in the case of prior-data conflict. Bayesian inference is one of the main approaches to statistical inference. It requires expressing (subjective) knowledge on the parameter(s) of interest not incorporated in the data by a so-called prior distribution. All inferences are then based on the so-called posterior distribution, which combines the prior knowledge and the information in the data via Bayes' Rule. The adequate choice of priors has always been a matter of intense debate in the Bayesian literature. While a considerable part of the literature is concerned with so-called non-informative priors, which aim to eliminate (or at least to standardise) the influence of priors on posterior inferences, inclusion of specific prior information into the model may be necessary if data are scarce or do not contain much information about the parameter(s) of interest; also, shrinkage estimators, common in frequentist approaches, can be considered as Bayesian estimators based on informative priors. When substantial information is used to elicit the prior distribution through, e.g., an expert's assessment, and the sample size is not large enough to eliminate the influence of the prior, prior-data conflict can occur: information from outlier-free data suggests parameter values which are surprising from the viewpoint of prior information, and it may not be clear whether the prior specifications or the integrity of the data-collecting method (the measurement procedure could, e.g., be systematically biased) should be questioned. In any case, such a conflict should be reflected in the posterior, leading to very cautious inferences, and most statisticians would thus expect to observe, e.g., wider credibility intervals for parameters in case of prior-data conflict. However, at least when modelling is based on conjugate priors, prior-data conflict is in most cases completely averaged out, giving a false certainty in posterior inferences. Here, imprecise or interval probability methods offer sound strategies to counter this issue, by mapping parameter uncertainty over sets of priors and posteriors instead of over single distributions. This approach is supported by recent research in economics, risk analysis and artificial intelligence, corroborating the multi-dimensional nature of uncertainty and concluding that standard probability theory as founded on Kolmogorov's or de Finetti's framework may be too restrictive, being appropriate only for describing one dimension, namely ideal stochastic phenomena. The thesis studies how to efficiently describe sets of priors in the setting of samples from an exponential family. Models are developed that offer enough flexibility to express a wide range of (partial) prior information, give reasonably cautious inferences in case of prior-data conflict while resulting in more precise inferences when prior and data agree well, and still remain easily tractable in order to be useful for statistical practice. Applications in various areas, e.g. common-cause failure modelling and Bayesian linear regression, are explored, and the developed approach is compared to other imprecise probability models.
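
To illustrate the behaviour described above, the following minimal Python sketch (an illustration of the general imprecise conjugate-prior idea, not the models developed in the thesis; all parameter values are invented) contrasts data that agree with a Beta prior against conflicting data: when the prior "strength" is allowed to vary in an interval, the range of posterior means widens under prior-data conflict, whereas a single conjugate posterior simply averages prior and data.

    # Minimal illustration: posterior mean intervals for a Beta-Binomial model
    # when the prior strength n0 varies in an interval [n0_lo, n0_hi].

    def posterior_mean_range(y0, n0_lo, n0_hi, successes, n):
        """Range of posterior means over the set of Beta(n0*y0, n0*(1-y0)) priors."""
        means = [(n0 * y0 + successes) / (n0 + n) for n0 in (n0_lo, n0_hi)]
        return min(means), max(means)

    y0 = 0.75             # prior guess for the success probability
    n0_lo, n0_hi = 2, 10  # imprecise prior strength
    n = 20

    for successes in (15, 3):   # 15/20 agrees with the prior, 3/20 conflicts with it
        lo, hi = posterior_mean_range(y0, n0_lo, n0_hi, successes, n)
        print(f"data {successes}/{n}: posterior mean in [{lo:.3f}, {hi:.3f}] "
              f"(width {hi - lo:.3f})")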

Oct 25, 2013
Regularity for degenerate elliptic and parabolic systems

In this work, the local behaviour of solutions to the inhomogeneous p-Laplace system in divergence form and its parabolic version is studied. This is a parabolic and non-linear generalisation of the Calderón-Zygmund theory for the Laplace operator, i.e. the borderline case BMO is studied. The two main results are local BMO and Hölder estimates for the inhomogeneous p-Laplace and the parabolic p-Laplace system. An adaptation of some estimates to fluid mechanics, namely to the p-Stokes equations, is also proven. The p-Stokes system is a very important physical model for so-called non-Newtonian fluids (e.g. blood). For this system, BMO and Hölder estimates are proven in the stationary two-dimensional case.
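
For orientation (standard notation, not quoted from the thesis, and using one common normalisation of the right-hand side), the p-Laplace operator in divergence form and the inhomogeneous elliptic and parabolic systems referred to above can be written as

    \Delta_p u := \operatorname{div}\bigl(|\nabla u|^{p-2}\nabla u\bigr), \qquad
    \Delta_p u = \operatorname{div} F, \qquad
    \partial_t u - \Delta_p u = \operatorname{div} F .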

Oct 14, 2013
Reifegradmodelle für Werkzeuglandschaften zur Unterstützung von ITSM-Prozessen

IT service providers face the major challenge of offering increasingly complex IT services at low cost and operating them efficiently. To achieve this, the discipline of IT service management (ITSM) introduces structured management processes. Tools support these processes and form an important interface between people, processes and technology. With these tools, processes can be coordinated, the technology can be managed efficiently, and important operational information can be consolidated. The appropriate use of tools is an essential prerequisite for carrying out complex tasks with as little effort as possible. Efficient ITSM therefore always pursues the goal of using tools optimally and supporting the ITSM processes in a meaningful way. This thesis presents an approach for optimizing the use of tools accordingly. The core of the approach is the definition of a maturity model for tool landscapes, which allows tool landscapes to be assessed and their support of the ITSM processes to be evaluated systematically. The result is a weighted list of requirements for the tool landscape, aimed at achieving the best possible process support. Because the requirements are prioritized, an IT service provider is not forced to adapt the tool landscape completely in one large step; instead, improvements can be made successively. The maturity model systematically supports implementing the most important requirements first, so that the ITSM processes can operate effectively. Efficiency is then increased in further steps by implementing additional requirements. The construction of such a maturity model is described as follows: first, requirements for a suitable approach were analyzed and a concept for a maturity model was developed. Building on this, the concept was applied exemplarily to develop a maturity model for tool landscapes supporting processes according to ISO/IEC 20000. The thesis concludes with an evaluation of the approach, in which the developed maturity model was applied empirically in the scenario of an IT service provider. This thesis lays the foundation for a holistic and integrated management of the tool landscape of IT service providers. Future work can adopt this methodology for specific application scenarios. In the long term, this work is intended to serve as a basis for establishing a standardized maturity model for tool landscapes in the context of ITSM.

Sep 11, 2013
Similarity search and mining in uncertain spatial and spatio-temporal databases

Both the current trends in technology, such as smart phones, general mobile devices, stationary sensors and satellites, and a new user mentality of utilizing this technology to voluntarily share information produce a huge flood of geo-spatial and geo-spatio-temporal data. This data flood provides a tremendous potential for discovering new and possibly useful knowledge. In addition to the fact that measurements are imprecise due to the physical limitations of the devices, some form of interpolation is needed between discrete time instants. From a complementary perspective, to reduce the communication and bandwidth utilization along with the storage requirements, the data is often subjected to a reduction, thereby eliminating some of the known/recorded values. These issues introduce the notion of uncertainty in the context of spatio-temporal data management, an aspect raising an imminent need for scalable and flexible data management. The main scope of this thesis is to develop effective and efficient techniques for similarity search and data mining in uncertain spatial and spatio-temporal data. In a plethora of research fields and industrial applications, these techniques can substantially improve decision making, minimize risk and unearth valuable insights that would otherwise remain hidden. The challenge of effectiveness in uncertain data is to correctly determine the set of possible results, each associated with the correct probability of being a result, in order to give a user confidence about the returned results. The complementary challenge of efficiency is to compute these results and the corresponding probabilities in an efficient manner, allowing for reasonable querying and mining times even for large uncertain databases. The paradigm used to master both challenges is to identify a small set of equivalence classes of possible worlds, such that members of the same class can be treated as equivalent in the context of a given query predicate or data mining task. In the scope of this work, this paradigm is formally defined and applied to the most prominent classes of spatial queries on uncertain data, including range queries, k-nearest-neighbor queries, ranking queries and reverse k-nearest-neighbor queries. For this purpose, new spatial and probabilistic pruning approaches are developed to further speed up query processing. Furthermore, the proposed paradigm allows the development of the first efficient solution for the problem of frequent co-location mining on uncertain data. Special emphasis is placed on the temporal aspect of applications using modern data collection technologies. While the aforementioned techniques work well for single points in time, the prediction of query results over time remains a challenge. This thesis fills this gap by modeling an uncertain spatio-temporal object as a stochastic process, and by applying the above paradigm to efficiently query, index and mine historical spatio-temporal data.
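
As a toy illustration of the possible-worlds semantics referred to above (not the pruning techniques of the thesis; all object data are invented), the following Python sketch computes, for uncertain objects given as discrete location alternatives with probabilities, the probability that each object satisfies a rectangular range query.

    # Toy possible-worlds semantics: each uncertain object is a list of
    # (x, y, probability) alternatives; the probability of qualifying for a
    # range query is the probability mass of the alternatives inside the window.

    def range_query_probability(alternatives, xmin, xmax, ymin, ymax):
        return sum(p for x, y, p in alternatives
                   if xmin <= x <= xmax and ymin <= y <= ymax)

    # Hypothetical uncertain objects (e.g. noisy position fixes of two users).
    objects = {
        "o1": [(1.0, 1.2, 0.5), (1.1, 0.9, 0.3), (4.0, 4.0, 0.2)],
        "o2": [(3.5, 3.6, 0.6), (3.8, 3.9, 0.4)],
    }

    for name, alts in objects.items():
        prob = range_query_probability(alts, 0.0, 2.0, 0.0, 2.0)
        print(f"{name}: P(inside query window) = {prob:.2f}")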

Aug 23, 2013
Tensor factorization for relational learning

Relational learning is concerned with learning from data where information is primarily represented in the form of relations between entities. In recent years, this branch of machine learning has become increasingly important, as relational data is generated in an unprecedented amount and has become ubiquitous in many fields of application such as bioinformatics, artificial intelligence and social network analysis. However, relational learning is a very challenging task, due to the network structure and the high dimensionality of relational data. In this thesis we propose that tensor factorization can be the basis for scalable solutions for learning from relational data and present novel tensor factorization algorithms that are particularly suited for this task. In the first part of the thesis, we present the RESCAL model - a novel tensor factorization for relational learning - and discuss its capabilities for exploiting the idiosyncratic properties of relational data. In particular, we show that, unlike existing tensor factorizations, our proposed method is capable of exploiting contextual information that is more distant in the relational graph. Furthermore, we present an efficient algorithm for computing the factorization. We show that our method achieves better or on-par results on common benchmark data sets when compared to current state-of-the-art relational learning methods, while being significantly faster to compute. In the second part of the thesis, we focus on large-scale relational learning and its applications to Linked Data. By exploiting the inherent sparsity of relational data, an efficient computation of RESCAL can scale up to the size of large knowledge bases, consisting of millions of entities, hundreds of relations and billions of known facts. We show this analytically via a thorough analysis of the runtime and memory complexity of the algorithm as well as experimentally via the factorization of the YAGO2 core ontology and the prediction of relationships in this large knowledge base on a single desktop computer. Furthermore, we derive a new procedure to reduce the runtime complexity for regularized factorizations from O(r^5) to O(r^3) - where r denotes the number of latent components of the factorization - by exploiting special properties of the factorization. We also present an efficient method for including attributes of entities in the factorization through a novel coupled tensor-matrix factorization. Experimentally, we show that RESCAL allows us to approach several relational learning tasks that are important to Linked Data. In the third part of this thesis, we focus on the theoretical analysis of learning with tensor factorizations. Although tensor factorizations have become increasingly popular for solving machine learning tasks on various forms of structured data, there exist only very few theoretical results on the generalization abilities of these methods. Here, we present the first known generalization error bounds for tensor factorizations. To derive these bounds, we extend known bounds for matrix factorizations to the tensor case. Furthermore, we analyze how these bounds behave for learning on over- and understructured representations, for instance, when matrix factorizations are applied to tensor data. In the course of deriving generalization bounds, we also discuss the tensor product as a principled way to represent structured data in vector spaces for machine learning tasks.
In addition, we evaluate our theoretical discussion with experiments on synthetic data, which support our analysis.
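
As a rough illustration of the bilinear form underlying RESCAL-style factorizations (a sketch based on the published model form, not code from the thesis; the factors below are random stand-ins, not trained values), each relation slice of the adjacency tensor is approximated as A · R_k · A^T, and a triple (i, k, j) is scored by a_i^T R_k a_j.

    import numpy as np

    # Sketch of RESCAL-style bilinear scoring: entities share a latent factor
    # matrix A (n x r) and every relation k has its own interaction matrix
    # R_k (r x r); slice k of the tensor is modelled as A @ R_k @ A.T.

    rng = np.random.default_rng(0)
    n_entities, n_relations, r = 5, 2, 3

    A = rng.normal(size=(n_entities, r))        # latent entity factors
    R = rng.normal(size=(n_relations, r, r))    # per-relation interaction matrices

    def score(i, k, j):
        """Predicted strength of the fact (entity i, relation k, entity j)."""
        return A[i] @ R[k] @ A[j]

    def reconstruct_slice(k):
        """Low-rank reconstruction of relation k's adjacency matrix."""
        return A @ R[k] @ A.T

    print(score(0, 1, 3))
    print(reconstruct_slice(1).shape)   # (5, 5)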

Aug 14, 2013
Towards an arithmetic for partial computable functionals

The thesis concerns itself with nonflat Scott information systems as an appropriate denotational semantics for the proposed theory TCF+, a constructive theory of higher-type partial computable functionals and approximations. We prove a definability theorem for type systems with at most unary constructors via atomic-coherent information systems, and give a simple proof for the density property for arbitrary finitary type systems using coherent information systems. We introduce the notions of token matrices and eigen-neighborhoods, and use them to locate normal forms of neighborhoods, as well as to demonstrate that even nonatomic information systems feature implicit atomicity. We then establish connections between coherent information systems and various pointfree structures. Finally, we introduce a fragment of TCF+ and show that extensionality can be eliminated.

Aug 12, 2013
Bereitstellung von Umgebungsinformationen und Positionsdaten für ortsbezogene Dienste in Gebäuden

With the advent and growing adoption of smartphones, location-based services have become a fixed part of many users' daily lives. Based on the current location, information is filtered, information about the surroundings is made available, or search results are ranked by locality. In addition, specific services such as mobile route finding and navigation become possible. Many services not only take the user's own position into account, but also allow the positions of friends to be displayed or automatic notifications to be generated when certain regions are entered. If a location-based service requires high positioning accuracy, the position is determined using global satellite navigation systems. In large, complex buildings such as museums, airports or hospitals there is also a demand for location-based information; examples include finding a particular exhibit in a museum, navigating to the correct gate at an airport, or meeting a friend inside the same building. Such location-based services inside buildings are referred to in the following as indoor location-based services (I-LBS). They simplify our lives in many situations and will in the future reach a level of adoption similar to that of conventional location-based services. Currently, however, no solution exists that enables I-LBS on a large scale. There are two main reasons for this: first, in contrast to outdoor areas, there is no generally available map base. Building plans are often kept under lock and key and are better suited for planning and supervising construction work than for extracting semantic information. Second, the reception of satellite signals inside buildings is so poor that, in general, no sufficiently accurate position can be determined from them, and an alternative low-cost, ubiquitously available positioning method of sufficient accuracy does not currently exist. This thesis presents and evaluates solutions to both problems that are intended to allow users to use services in a way comparable to what they are already used to outdoors. Based on the requirements of I-LBS and positioning systems, two different environment models are developed. One is based on the Geography Markup Language (GML) and offers a flexible vector-based representation of a building with hierarchical and graph-based elements. In addition, the fully automatic generation of such a model from building plans is presented, a further step towards providing maps for I-LBS on a large scale. The other model is based on a bitmap as a raster-based map representation, which is semantically enriched using image processing algorithms and colour-coding conventions. Here, too, options for the automatic generation of the semantic model, for example from photographed escape plans, are discussed. In a final step, both models are combined into a flexible hybrid environment model, so that queries can be answered as efficiently as possible depending on the available data. Indoor positioning is addressed through several improvements to fingerprinting approaches on smartphones. The fingerprinting is based either on camera images or on WLAN signals. In addition, further sensors such as the compass and the accelerometer are used to improve accuracy and robustness.
To make positioning usable for I-LBS, however, the main requirement is not only high accuracy but above all great flexibility. For this purpose, an approach was developed that builds a fingerprint database without user interaction, based solely on map material and the inertial sensors of one or more users; this database can then be made available to other users. Aiming to reduce cost and complexity, and to solve the problem of keeping fingerprint databases up to date, the approach helps to automatically deploy reference data for positioning on a large scale. To bridge the gap between I-LBS and LBS, it is not sufficient to consider both kinds of services separately. Seamless service usage must be possible, which requires both seamless positioning and seamless provision of map material. For this purpose, a platform was developed that automatically determines, based on a sensor description language, the selection and combination of sensors to be used for positioning. The platform also contains a component that provides suitable environment models based on the position data and enables the transformation of position data between different models.
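
A minimal sketch of the WLAN-fingerprinting idea mentioned above (nearest neighbour in signal space; invented access points and values, not the thesis's system):

    import math

    # Toy WLAN fingerprinting: each reference fingerprint maps access-point IDs
    # to received signal strengths (dBm) at a known position; an observed scan
    # is located at the reference position with the smallest signal-space
    # distance (1-nearest-neighbour).  All values are made up.

    fingerprints = {
        (0.0, 0.0): {"ap1": -40, "ap2": -70, "ap3": -80},
        (5.0, 0.0): {"ap1": -60, "ap2": -50, "ap3": -75},
        (5.0, 5.0): {"ap1": -75, "ap2": -55, "ap3": -45},
    }

    def signal_distance(scan, reference, missing=-100):
        aps = set(scan) | set(reference)
        return math.sqrt(sum((scan.get(ap, missing) - reference.get(ap, missing)) ** 2
                             for ap in aps))

    def locate(scan):
        return min(fingerprints, key=lambda pos: signal_distance(scan, fingerprints[pos]))

    print(locate({"ap1": -62, "ap2": -52, "ap3": -73}))   # -> (5.0, 0.0)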

Jul 22, 2013
Universal moduli spaces in Gromov-Witten theory

The construction of manifold structures and fundamental classes on the (compactified) moduli spaces appearing in Gromov-Witten theory is a long-standing problem. Until recently, most successful approaches involved the imposition of topological constraints, such as semi-positivity, on the underlying symplectic manifold to deal with this situation. One conceptually very appealing approach that removed most of these restrictions is that of K. Cieliebak and K. Mohnke via complex hypersurfaces, [CM07]. In contrast to other approaches using abstract perturbation theory, it has the advantage that the objects to be studied are still spaces of holomorphic maps defined on Riemann surfaces. In this thesis this approach is generalised from the genus-0 case dealt with in [CM07] to the general case. In the first section, the spaces of Riemann surfaces that take the place of the Deligne-Mumford spaces are introduced, in order to deal with the fact that the latter are orbifolds. Also, for use in the later parts, the interrelations of these spaces for different numbers of marked points are clarified. After a preparatory section on Sobolev spaces of sections in a fibration, the results presented there are used, after a short exposition on Hamiltonian perturbations and the associated moduli spaces of perturbed curves, to construct a decomposition of the universal moduli space into smooth Banach manifolds. The focus there lies mainly on the global aspects of the construction, since the local picture, i.e. the actual transversality of the universal Cauchy-Riemann operator to the zero section, is well understood. Then the compactification of this moduli space in the presence of bubbling is presented, the later construction is motivated, and a rough sketch of the basic idea behind it is given. In the last part of the first chapter, the definitions and results needed to transfer the results on moduli spaces of curves with tangency conditions from [CM07] are given. There, the necessary restrictions on the almost complex structures and Hamiltonian perturbations from [IP03] are also incorporated, which later allow the use of the compactness theorem proved in that reference. In the last part of this thesis, these results are then used to give a definition of a Gromov-Witten pseudocycle, using an adapted version of the moduli spaces of curves with additional marked points that are mapped to a complex hypersurface from [CM07]. A proof that this is well-defined is then given, using the compactness theorem from [IP03] to obtain a description of the boundary and the constructions from the previous parts to cover the boundary by manifolds of the correct dimensions.

Jul 10, 2013
Bayesian regularization in regression models for survival data

This thesis is concerned with the development of flexible continuous-time survival models based on the accelerated failure time (AFT) model for the survival time and the Cox relative risk (CRR) model for the hazard rate. The flexibility concerns, on the one hand, the extension of the predictor to simultaneously take into account a variety of different forms of covariate effects. On the other hand, the often too restrictive parametric assumptions about the survival distribution are replaced by semiparametric approaches that allow very flexible shapes of the survival distribution. We use the Bayesian methodology for inference. The arising problems, e.g. the penalization of high-dimensional linear covariate effects, the smoothing of nonlinear effects, as well as the smoothing of the baseline survival distribution, are solved by applying regularization priors tailored to the respective demand. The considered expansion of the two survival model classes makes it possible to deal with various challenges arising in the practical analysis of survival data. For example, the models can deal with high-dimensional feature spaces (e.g. gene expression data), they facilitate feature selection from the whole set or a subset of the available covariates, and they enable the simultaneous modeling of any type of nonlinear covariate effect for covariates that should always be included in the model. The option of nonlinear modeling of covariate effects as well as semiparametric modeling of the survival time distribution furthermore enables a visual inspection of the linearity assumptions about the covariate effects or, accordingly, of the parametric assumptions about the survival time distribution. In this thesis it is shown how the p>n paradigm, feature relevance, semiparametric inference for functional effect forms and semiparametric inference for the survival distribution can be treated within a unified Bayesian framework. Due to the option to control the amount of regularization of the considered priors for the linear regression coefficients, there is no need to distinguish conceptually between the cases p<n and p>n. To accomplish the desired regularization, the regression coefficients are associated with shrinkage, selection or smoothing priors. Since the utilized regularization priors all facilitate a hierarchical representation, the resulting modular prior structure, in combination with adequate independence assumptions for the prior parameters, makes it possible to establish a unified framework and to construct efficient MCMC sampling schemes for joint shrinkage, selection and smoothing in flexible classes of survival models. The Bayesian formulation therefore enables the simultaneous estimation of all parameters involved in the models as well as prediction and uncertainty statements about model specification. The presented methods are inspired by the flexible and general approach of structured additive regression (STAR) for responses from an exponential family and for CRR-type survival models. Such systematic and flexible extensions are in general not available for AFT models. One aim of this work is to extend the class of AFT models in order to provide a class of models as rich as that resulting from the STAR approach, where the main focus lies on the shrinkage of linear effects and the selection of covariates with linear effects, together with the smoothing of nonlinear effects of continuous covariates as a representative of nonlinear modeling.
In particular, the Bayesian lasso, the Bayesian ridge and the Bayesian NMIG prior (a kind of spike-and-slab prior) are combined to regularize the linear effects, and the P-spline approach is used to regularize the smoothness of the nonlinear effects and of the baseline survival time distribution. To model a flexible error distribution for the AFT model, the parametric assumption for the baseline error distribution is replaced by the assumption of a finite Gaussian mixture distribution. For the special case of specifying one basis mixture component, the estimation problem essentially boils down to the estimation of a log-normal AFT model with a STAR predictor. In addition, the existing class of CRR survival models with STAR predictor, where the baseline hazard rate is also approximated by a P-spline, is expanded to enable the regularization of the linear effects with the mentioned priors, which further broadens the area of application of this rich class of CRR models. Finally, the combined shrinkage, selection and smoothing approach is also introduced into the semiparametric version of the CRR model, where the baseline hazard is left unspecified and inference is based on the partial likelihood. Besides the extension of the two survival model classes, the different regularization properties of the considered shrinkage and selection priors are examined. The developed methods and algorithms are implemented in the publicly available software BayesX and in R functions, and their performance is extensively tested in simulation studies and illustrated through three real-world data sets.
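
A small, purely illustrative sketch of the AFT building block mentioned above: an uncensored log-normal AFT model, here fitted with a frequentist lasso-type penalty rather than the thesis's Bayesian shrinkage priors; the data and the penalty weight are simulated/invented.

    import numpy as np
    from scipy.optimize import minimize

    # Uncensored log-normal AFT model log(T) = X @ beta + error, fitted with an
    # L1 (lasso-type) penalty on the regression coefficients.  Data are simulated.

    rng = np.random.default_rng(1)
    n, p = 200, 10
    X = rng.normal(size=(n, p))
    beta_true = np.array([1.0, -0.8, 0.5] + [0.0] * (p - 3))   # sparse truth
    log_t = X @ beta_true + 0.3 * rng.normal(size=n)           # log survival times

    lam = 5.0   # penalty weight, chosen arbitrarily for the illustration

    def objective(beta):
        resid = log_t - X @ beta
        return 0.5 * resid @ resid + lam * np.abs(beta).sum()

    fit = minimize(objective, x0=np.zeros(p), method="Powell")
    print(np.round(fit.x, 2))   # small coefficients are shrunk towards zero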

Jun 20, 2013
Kooperative Mobilität in Megastädten

Mobility, in the form of the transport of goods and people, is an essential part of today's society, since it has an enormous influence on economic performance and social life. Nothing embodies the notions of individuality, flexibility and freedom more than the private car, and at the same time, in the mass, nothing threatens them more. The problems associated with traffic are concentrated in megacities in particular; in addition to congestion, they lead to an overloaded infrastructure and have considerable consequences for the environment. This thesis presents several approaches to these problems and explains their technical realization. From the users' perspective, applications for promoting collective and shared transport as well as an approach to community-based parking-space management are presented. Subsequently, from the perspective of mobility providers, a cooperative approach for a flexible and demand-driven door-to-door transport service is described. Finally, a system for community-based pollution monitoring is discussed, which on the one hand provides a detailed basis for infrastructure operators and urban planners and on the other hand can be used as the basis for environmentally aware applications. Supported by information and communication technologies in combination with mobile devices, and building on community collaboration, the developed applications and systems thus contribute to promoting efficient and sustainable mobility in megacities.

Jun 12, 2013
Determining high-risk zones by using spatial point process methodology

Methods for constructing high-risk zones, which can be used in situations where a spatial point pattern has been observed incompletely, are introduced and evaluated with regard to unexploded bombs in federal properties in Germany. Unexploded bombs from the Second World War represent a serious problem in Germany. It is desirable to search high-risk zones for unexploded bombs, but this causes high costs, so the search is usually restricted to carefully selected areas. If suitable aerial pictures of the area in question exist, statistical methods can be used to determine such zones by considering the patterns of exploded bombs as realisations of spatial point processes. The patterns analysed in this thesis were provided by the Oberfinanzdirektion Niedersachsen, which supports the removal of unexploded ordnance in federal properties in Germany. They were derived from aerial pictures taken by the Allies during and after World War II. The main task consists of finding regions that are as small as possible while containing as many of the unexploded bombs as possible. In this thesis, an approach based on the intensity function of the process is introduced: the high-risk zones consist of those parts of the observation window where the estimated intensity is largest, i.e. where the estimated intensity function exceeds a cut-off value c. The cut-off value can be derived from the risk associated with the high-risk zone, which is defined as the probability that there are unexploded bombs outside the zone. A competing approach for determining high-risk zones consists in using the union of discs around all exploded bombs as the high-risk zone, where the radius is chosen as a high quantile of the nearest-neighbour distance of the point pattern. In an evaluation procedure, both methods yield comparably good results, but the theoretical properties of the intensity-based high-risk zones are considerably better. A further goal is to perform a risk assessment of the investigated area by estimating the probability that there are unexploded bombs outside the high-risk zone. This is especially important as the estimation of the intensity function is a crucial issue for the intensity-based method, so the risk cannot be determined exactly in advance. A procedure to calculate the risk is introduced. By using a bootstrap correction, it is possible to decide on acceptable risks and to find the optimal, i.e. smallest, high-risk zone for a fixed probability that not all unexploded bombs are located inside the high-risk zone. The consequences of clustering are investigated in a sensitivity analysis by exploiting the procedure for calculating the risk. Furthermore, different types of models which account for clustering are fitted to the data, both classical cluster models and a mixture of bivariate normal distributions.
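
A sketch of the intensity-based idea in Python (illustration only, with an invented point pattern; the thesis derives the cut-off from the accepted risk, whereas here it is set arbitrarily): estimate the intensity of the observed pattern with a kernel estimator and declare the high-risk zone to be the part of the window where the estimate exceeds the cut-off.

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    exploded = np.vstack([rng.normal([2, 2], 0.5, size=(60, 2)),
                          rng.normal([7, 6], 0.8, size=(40, 2))])   # observed pattern

    kde = gaussian_kde(exploded.T)                 # kernel intensity estimate (up to scale)
    xx, yy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
    grid = np.vstack([xx.ravel(), yy.ravel()])
    intensity = kde(grid).reshape(xx.shape)

    c = np.quantile(intensity, 0.80)               # cut-off value, arbitrary here
    high_risk = intensity > c                      # grid cells inside the high-risk zone
    print("share of window in high-risk zone:", round(float(high_risk.mean()), 2))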

Jun 07, 2013
Context based bioinformatics

The goal of bioinformatics is to develop innovative and practical methods and algorithms for biological questions. In many cases, these questions are driven by new biotechnological techniques, especially by genome- and cell-wide high-throughput experimental studies. In principle there are two approaches: 1. Reduction and abstraction of the question to a clearly defined optimization problem, which can be solved with appropriate and efficient algorithms. 2. Development of context-based methods, incorporating as much contextual knowledge as possible in the algorithms, and derivation of practical solutions for relevant biological questions on the high-throughput data. These methods can often be supported by appropriate software tools and visualizations, allowing for interactive evaluation of the results by experts. Context-based methods are often much more complex and require more involved algorithmic techniques to obtain practically relevant and efficient solutions for real-world problems, as in many cases even the simplified abstraction of a problem results in NP-hard problem instances. In many cases, to solve these complex problems, one needs to employ efficient data structures and heuristic search methods to solve clearly defined sub-problems using efficient (polynomial) optimization (such as dynamic programming, greedy, path or tree algorithms). In this thesis, we present new methods and analyses addressing open questions of bioinformatics from different contexts by incorporating the corresponding contextual knowledge. The two main contexts in this thesis are the protein structure similarity context (Part I) and the network-based interpretation of high-throughput data (Part II). For the protein structure similarity context (Part I) we analyze the consistency of gold-standard structure classification systems and derive a consistent benchmark set usable for different applications. We introduce two methods (Vorolign, PPM) for the protein structure similarity recognition problem, based on different features of the structures. Derived from the idea and results of Vorolign, we introduce the concept of a contact neighborhood potential, aiming to improve the results of protein fold recognition and threading. For the re-scoring problem of predicted structure models we introduce the method Vorescore, which clearly improves the fold-recognition performance and enables the evaluation of the contact neighborhood potential for structure prediction methods in general. We introduce a contact-consistent Vorolign variant, ccVorolign, further improving the structure-based fold recognition performance and enabling direct optimization of the neighborhood potential in the future. Due to the enforcement of contact consistency, the ccVorolign method has a much higher computational complexity than the polynomial Vorolign method - the cost of computing interpretable and consistent alignments. Finally, we introduce a novel structural alignment method (PPM) enabling the explicit modeling and handling of phenotypic plasticity in protein structures. We employ PPM for the analysis of the effects of alternative splicing on protein structures. With the help of PPM we test the hypothesis whether splice isoforms of the same protein can lead to protein structures with different folds (fold transitions). In Part II of the thesis we present methods generating and using context information for the interpretation of high-throughput experiments.
For the generation of context information on molecular regulations we introduce novel text-mining approaches that extract relations automatically from scientific publications. In addition to the fast NER (named entity recognition) method (syngrep) we also present a novel, fully ontology-based, context-sensitive method (SynTree) allowing for the context-specific disambiguation of ambiguous synonyms and resulting in much better identification performance. This context information is important for the interpretation of high-throughput data, but is often missing in current databases. Despite all improvements, the results of automated text-mining methods are error-prone. The RelAnn application presented in this thesis helps to curate the automatically extracted regulations, enabling manual and ontology-based curation and annotation. For the usage of high-throughput data one needs additional methods for data processing, for example methods to map the hundreds of millions of short DNA/RNA fragments (so-called reads) to a reference genome or transcriptome. Such data (RNA-seq reads) are the output of next-generation sequencing methods measured by sequencing machines, which are becoming more and more efficient and affordable. Unlike current state-of-the-art methods, our novel read-mapping method ContextMap resolves the occurring ambiguities at the final step of the mapping process, thereby employing the knowledge of the complete set of possible ambiguous mappings. This approach allows for higher precision, even if more nucleotide errors are tolerated in the read mappings in the first step. The consistency between context information on molecular regulations, stored in databases or extracted by text mining, and measured data can be used to identify and score consistent regulations (GGEA). This method substantially extends the commonly used gene-set-based methods such as over-representation analysis (ORA) and gene set enrichment analysis (GSEA). Finally we introduce the novel method RelExplain, which uses the extracted contextual knowledge and generates network-based and testable hypotheses for the interpretation of high-throughput data.

May 10, 2013
Regularized estimation and model selection in compartment models

Dynamic imaging series acquired in medical and biological research are often analyzed with the help of compartment models. Compartment models provide a parametric, nonlinear function of interpretable, kinetic parameters describing how some concentration of interest evolves over time. Estimating the kinetic parameters then leads to a nonlinear regression problem. In many applications, the number of compartments needed in the model is not known from biological considerations but should be inferred from the data along with the kinetic parameters. As data from medical and biological experiments are often available in the form of images, the spatial data structure of the images has to be taken into account. This thesis addresses the problem of parameter estimation and model selection in compartment models. Besides a penalized maximum likelihood based approach, several Bayesian approaches, including a hierarchical model with Gaussian Markov random field priors and a model state approach with flexible model dimension, are proposed and evaluated to accomplish this task. Existing methods are extended for parameter estimation and model selection in more complex compartment models. However, in nonlinear regression and, in particular, for more complex compartment models, redundancy issues may arise. This thesis analyzes the difficulties arising due to such redundancy issues and proposes several approaches to alleviate them by regularizing the parameter space. The potential of the proposed estimation and model selection approaches is evaluated in simulation studies as well as for two in vivo imaging applications: a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) study on breast cancer and a study on the binding behavior of molecules in living cell nuclei observed in a fluorescence recovery after photobleaching (FRAP) experiment.
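
As a toy example of the kind of parametric, nonlinear function meant above (a simple two-exponential compartment-type concentration curve fitted by nonlinear least squares; simulated data, not data from the imaging studies):

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy compartment-type model: the concentration over time is a sum of
    # exponentially decaying terms with interpretable kinetic parameters
    # (amplitudes a_i, rate constants b_i).  Data are simulated with noise.

    def concentration(t, a1, b1, a2, b2):
        return a1 * np.exp(-b1 * t) + a2 * np.exp(-b2 * t)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 60)
    true = (2.0, 1.5, 1.0, 0.2)
    y = concentration(t, *true) + 0.05 * rng.normal(size=t.size)

    params, _ = curve_fit(concentration, t, y, p0=(1, 1, 1, 0.1))
    print(np.round(params, 2))   # estimated kinetic parameters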

Apr 26, 2013
Remote tactile feedback on interactive surfaces

Direct touch input on interactive surfaces has become a predominant standard for the manipulation of digital information in our everyday lives. However, compared to our rich interchange with the physical world, the interaction with touch-based systems is limited in terms of flexibility of input and expressiveness of output. In particular, the lack of tactile feedback greatly reduces the general usability of a touch-based system and hinders a productive entanglement of the virtual information with the physical world. This thesis proposes remote tactile feedback as a novel method to provide programmed tactile stimuli supporting direct touch interactions. The overall principle is to spatially decouple the location of touch input (e.g. fingertip or hand) and the location of the tactile sensation on the user's body (e.g. forearm or back). Remote tactile feedback is an alternative concept which avoids particular challenges of existing approaches. Moreover, the principle provides inherent characteristics which can accommodate the requirements of current and future touch interfaces. To define the design space, the thesis provides a structured overview of current forms of touch surfaces and identifies trends towards non-planar and non-rigid forms with more versatile input mechanisms. Furthermore, a classification highlights limitations of the current methods to generate tactile feedback on touch-based systems. The proposed notion of tactile sensory relocation is a form of sensory substitution. Underlying neurological and psychological principles corroborate the approach. Thus, characteristics of the human sense of touch and principles from sensory substitution help to create a technical and conceptual framework for remote tactile feedback. Three consecutive user studies measure and compare the effects of both direct and remote tactile feedback on the performance and the subjective ratings of the user. Furthermore, the experiments investigate different body locations for the application of tactile stimuli. The results show high subjective preferences for tactile feedback, regardless of its type of application. Additionally, the data reveal no significant differences between the effects of direct and remote stimuli. The results back the feasibility of the approach and provide parameters for the design of stimuli and the effective use of the concept. The main part of the thesis describes the systematic exploration and analysis of the inherent characteristics of remote tactile feedback. Four specific features of the principle are identified: (1) the simplification of the integration of cutaneous stimuli, (2) the transmission of proactive, reactive and detached feedback, (3) the increased expressiveness of tactile sensations and (4) the provision of tactile feedback during multi-touch. In each class, several prototypical remote tactile interfaces are used in evaluations to analyze the concept. For example, the PhantomStation utilizes psychophysical phenomena to reduce the number of single tactile actuators. An evaluation with the prototype compares standard actuator technologies with each other in order to enable simple and scalable implementations. The ThermalTouch prototype creates remote thermal stimuli to reproduce material characteristics on standard touchscreens. The results show a stable rate of virtual object discrimination based on remotely applied temperature profiles.
The AutomotiveRTF system is implemented in a vehicle and supports the driver's input on the in-vehicle infotainment system. A field study with the system focuses on evaluating the effects of proactive and reactive feedback on the user's performance. The main contributions of the dissertation are: First, the thesis introduces the principle of remote tactile feedback and defines a design space for this approach as an alternative method to provide non-visual cues on interactive surfaces. Second, the thesis describes technical examples to rapidly prototype remote tactile feedback systems. Third, these prototypes are deployed in several evaluations which highlight the beneficial subjective and objective effects of the approach. Finally, the thesis presents features and inherent characteristics of remote tactile feedback as a means to support the interaction on today's touchscreens and future interactive surfaces.

Apr 25, 2013
Modeling of dynamic systems with Petri nets and fuzzy logic

Current methods for the dynamic modeling of biological systems are often difficult to understand for users without mathematical training. Moreover, precise data and detailed knowledge about concentrations, reaction kinetics or regulatory effects are frequently missing. Computer-aided modeling of a biological system therefore has to cope with uncertainty and coarse information provided in the form of qualitative knowledge and natural-language descriptions. The author proposes a new approach to overcome these limitations, combining a Petri-net-based graphical representation of systems with powerful yet intuitive fuzzy-logic-based modeling. The Petri net and fuzzy logic (PNFL) approach allows a natural-language-based description of biological entities as well as an if-then-rule-based definition of reactions, both of which can be derived simply and directly from qualitative knowledge. PNFL thereby bridges qualitative knowledge and quantitative modeling.
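
A minimal sketch of the kind of if-then fuzzy rule that such an approach builds on (generic fuzzy-logic machinery with invented membership functions, not the thesis's PNFL implementation):

    # Generic fuzzy if-then rule: triangular membership functions describe
    # "high" concentrations in natural-language terms, and a rule such as
    # "IF activator IS high THEN production IS high" is evaluated by clipping
    # the output set at the rule's firing strength.

    def triangular(x, a, b, c):
        """Membership of x in a triangular fuzzy set with corners a <= b <= c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def activator_high(x):
        return triangular(x, 0.4, 1.0, 1.6)

    def production_high(y):
        return triangular(y, 0.4, 1.0, 1.6)

    activator_level = 0.8
    firing = activator_high(activator_level)    # degree to which the rule applies

    # Clipped output membership ("production is high", limited by the firing strength)
    output = [min(firing, production_high(y / 10)) for y in range(17)]
    print(f"firing strength = {firing:.2f}")
    print([round(v, 2) for v in output])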

Apr 19, 2013
Der verteilte Fahrerinteraktionsraum

Historically, driving-related and entertainment-related information has been arranged in spatially separate locations in the vehicle interior: displays required for the driving task are located directly in front of the driver (instrument cluster and head-up display), while the content of the driver information system is located in the center console (central information display). This strict separation is currently dissolving; for example, parts of the infotainment content can now be accessed and operated in the instrument cluster. To allow the driver to handle the growing amount of infotainment content safely, to reduce the complexity of the driver interaction space and to increase customer benefit, this thesis looks at the currently isolated displays holistically and re-explores the limits of today's strict distribution of information. Foundations are laid for traffic-compatible operation and presentation of distributed information depending on the display surface, concepts for user-initiated customization are developed, and the interplay of different display surfaces is evaluated. The studies conducted in this thesis show that a spatially distributed driver interaction space makes operating the driver information system safer and more attractive for the user.

Feb 22, 2013
Pseudoholomorphic curves in exact Courant algebroids

In this thesis I introduce, among other things, the notion of generalized pseudoholomorphic curves and pairs. Furthermore, I study their properties and their role in topological string theory.

Feb 18, 2013
Advances in boosting of temporal and spatial models

Boosting is an iterative algorithm for functional approximation and numerical optimization which can be applied to solve statistical regression-type problems. By design, boosting can mimic the solutions of many conventional statistical models, such as the linear model, the generalized linear model, and the generalized additive model, but its strength is to enhance these models or even go beyond. It enjoys increasing attention since a) it is a generic algorithm, easily extensible to exciting new problems, and b) it can cope with "difficult" data where conventional statistical models fail. In this dissertation, we design autoregressive time series models based on boosting which capture nonlinearity in the mean and in the variance, and propose new models for multi-step forecasting of both. We use a special version of boosting, called componentwise gradient boosting, which is innovative in the estimation of the conditional variance of asset returns by sorting out irrelevant (lagged) predictors. We propose a model which enables us not only to identify the factors which drive market volatility, but also to assess the specific nature of their impact. Therefore, we gain a deeper insight into the nature of the volatility processes. We analyze four broad asset classes, namely, stocks, commodities, bonds, and foreign exchange, and use a wide range of potential macro and financial drivers. The proposed model for volatility forecasting performs very favorably for stocks and commodities relative to the common GARCH(1,1) benchmark model. The advantages are particularly convincing for longer forecasting horizons. To our knowledge, the application of boosting to multi-step forecasting of either the mean or the variance has not been done before. In a separate study, we focus on the conditional mean of German industrial production. With boosting, we improve the forecasting accuracy when compared to several competing models including the benchmark in this field, the linear autoregressive model. In an exhaustive simulation study we show that boosting of high-order nonlinear autoregressive time series can be very competitive in terms of goodness-of-fit when compared to alternative nonparametric models. Finally, we apply boosting in a spatio-temporal context to data coming from outside the econometric field. We estimate the browsing pressure on young beech trees caused by the game species within the borders of the Bavarian Forest National Park "Bayerischer Wald", Germany. We found that using the geographic coordinates of the browsing cases contributes considerably to the fit. Furthermore, this bivariate geographic predictor is better suited for prediction if it allows for abrupt changes in the browsing pressure.
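
A compact sketch of componentwise (L2) gradient boosting as described above (a generic textbook version on simulated data, not the thesis's time-series models): in each iteration every candidate predictor is fitted to the current residuals by simple least squares, and only the best-fitting component is updated by a small step.

    import numpy as np

    # Componentwise L2 gradient boosting: irrelevant predictors tend never to
    # be selected, which performs variable selection as a by-product.

    rng = np.random.default_rng(4)
    n, p = 300, 8
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * rng.normal(size=n)   # only x0, x3 matter

    nu, n_iter = 0.1, 200
    beta = np.zeros(p)
    for _ in range(n_iter):
        resid = y - X @ beta
        # least-squares fit of each single predictor to the current residuals
        coeffs = (X * resid[:, None]).sum(axis=0) / (X ** 2).sum(axis=0)
        losses = ((resid[:, None] - X * coeffs) ** 2).sum(axis=0)
        j = int(np.argmin(losses))        # best-fitting component
        beta[j] += nu * coeffs[j]         # boosting update with step length nu

    print(np.round(beta, 2))   # coefficients of irrelevant predictors stay near zero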

Jan 30, 2013
Similarity processing in multi-observation data

Many real-world application domains such as sensor-monitoring systems for environmental research or medical diagnostic systems are dealing with data that is represented by multiple observations. In contrast to single-observation data, where each object is assigned to exactly one occurrence, multi-observation data is based on several occurrences that are subject to two key properties: temporal variability and uncertainty. When defining similarity between data objects, these properties play a significant role. In general, methods designed for single-observation data hardly apply to multi-observation data, as they are either not supported by the data models or do not provide sufficiently efficient or effective solutions. Prominent directions incorporating the key properties are the fields of time series, where data is created by temporally successive observations, and uncertain data, where observations are mutually exclusive. This thesis provides research contributions for similarity processing - similarity search and data mining - on time series and uncertain data. The first part of this thesis focuses on similarity processing in time series databases. A variety of similarity measures have recently been proposed that support similarity processing w.r.t. various aspects. In particular, this part deals with time series that consist of periodic occurrences of patterns. Examining an application scenario from the medical domain, a solution for activity recognition is presented. Finally, the extraction of feature vectors allows the application of spatial index structures, which support the acceleration of search and mining tasks, resulting in a significant efficiency gain. As feature vectors are potentially of high dimensionality, this part introduces indexing approaches for the high-dimensional space, for the full-dimensional case as well as for arbitrary subspaces. The second part of this thesis focuses on similarity processing in probabilistic databases. The presence of uncertainty is inherent in many applications dealing with data collected by sensing devices. Often, the collected information is noisy or incomplete due to measurement or transmission errors. Furthermore, data may be rendered uncertain due to privacy-preserving issues with the presence of confidential information. This creates a number of challenges in terms of effectively and efficiently querying and mining uncertain data. Existing work in this field either neglects the presence of dependencies or provides only approximate results while applying methods designed for certain data. Other approaches dealing with uncertain data are not able to provide efficient solutions. This part presents query processing approaches that outperform existing solutions for probabilistic similarity ranking. This part finally leads to the application of the introduced techniques to data mining tasks, such as the prominent problem of probabilistic frequent itemset mining.
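
To illustrate the feature-vector step mentioned above (a generic example, not the thesis's index structures: once time series are summarized by feature vectors, a spatial index such as a k-d tree answers nearest-neighbour similarity queries without scanning the whole database; random vectors stand in for real features):

    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(5)
    features = rng.normal(size=(10_000, 8))    # feature vectors of 10,000 time series
    tree = cKDTree(features)                   # spatial index over the feature space

    query = rng.normal(size=8)                 # feature vector of a query series
    dist, idx = tree.query(query, k=5)         # the 5 most similar series
    print(idx, np.round(dist, 2))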

Dec 21, 2012
Orbifoldizing Hopf- and Nichols-Algebras

The main goal of this thesis is to explore a new general construction of orbifoldizing Hopf and Nichols algebras, to describe the growth of the automorphism group, and to compare the behaviour of certain associated categories to Kirillov's orbifoldizing. Together with outlooks towards vertex algebras, these aspects form the five-fold subdivision of this thesis. The main application of this theory is the construction of new finite-dimensional Nichols algebras, sometimes of large rank. In the process, the associated group is centrally extended and the root system is folded, as shown e.g. for E6->F4 on the title page. Thus, in some sense, orbifoldizing constructs new finite-dimensional quantum groups with nonabelian Cartan algebra.

Dec 21, 2012
Prototyping tools for hybrid interactions

In using the term 'hybrid interactions', we refer to interaction forms that comprise both tangible and intangible interactions as well as a close coupling of the physical or embodied representation with digital output. Until now, there has been no description of a formal design process for this emerging research domain, no description that can be followed during the creation of these types of interactions. As a result, designers face limitations in prototyping these systems. In this thesis, we share our systematic approach to envisioning, prototyping, and iteratively developing these interaction forms by following an extended interaction design process. We share our experiences with process extensions in the form of toolkits, which we built for this research and utilized to aid designers in the development of hybrid interactive systems. The proposed tools incorporate different characteristics and are intended to be used at different points in the design process. In Sketching with Objects, we describe a low-fidelity toolkit that is intended to be used in the very early phases of the process, such as ideation and user research. By introducing Paperbox, we present an implementation to be used in the mid-process phases for finding the appropriate mapping between physical representation and digital content during the creation of tangible user interfaces (TUI) atop interactive surfaces. In a follow-up project, we extended this toolkit to also be used in conjunction with capacitive sensing devices. To do this, we implemented Sketch-a-TUI. This approach allows designers to create TUIs on capacitive sensing devices rapidly and at low cost. To lower the barriers for designers using the toolkit, we created the Sketch-a-TUIApp, an application that allows even novice users (users without previous coding experience) to create early instantiations of TUIs. In order to prototype intangible interactions, we used open software and hardware components and proposed an approach of investigating interactivity in correlation with intangible interaction forms at a higher fidelity. With our final design process extension, Lightbox, we assisted a design team in systematically developing a remote interaction system connected to a media façade covering a building. All of the above-mentioned toolkits were explored both in real-life contexts and in projects with industrial partners. The evaluation was therefore mainly performed in the wild, which led to the adaptation of metrics suitable to the individual cases and contexts.

Dec 06, 2012
On the behavior of multiple comparison procedures in complex parametric designs

The framework for simultaneous inference by Hothorn, Bretz, and Westfall (2008) allows for a unified treatment of multiple comparisons in general parametric models where the study questions are specified as linear combinations of elemental model parameters. However, due to the asymptotic nature of the reference distribution, the procedure controls the error rate across all comparisons only for sufficiently large samples. This thesis evaluates the small-sample properties of simultaneous inference in complex parametric designs. Such designs are necessary to address questions from applied research and include nonstandard parametric models or data in which the assumptions of classical procedures for multiple comparisons are not met. The thesis first treats multiple comparisons of samples with heterogeneous variances. Using a heteroscedasticity-consistent covariance estimator prevents an increase in the probability of false positive findings for reasonable sample sizes, whereas the classical procedures show liberal or conservative behavior which persists even with increasing sample size. The focus of the second part is multiple comparisons in survival models. Multiple comparisons to a control can be performed in correlated survival data modeled by a frailty Cox model under control of the familywise error rate at sample sizes applicable to clinical trials. As a further application, multiple comparisons in survival models can be used to investigate trends. The procedure achieves good power to detect different dose-response shapes and controls the probability of falsely detecting any trend. The third part addresses multiple comparisons in semiparametric mixed models. Simultaneous inference in the linear mixed model representation of these models yields an approach for multiple comparisons of curves of arbitrary shape. The sections on which curves differ can also be identified. For reasonably large samples the overall error rate to detect any non-existent difference is controlled. An extension allows for multiple comparisons of areas under the curve. However, the resulting procedure achieves overall error control only for sample sizes considerably larger than those available in studies in which multiple AUC comparisons are usually performed. The usage of the evaluated procedures is illustrated by examples from applied research, including comparisons of fatty acid contents between Bacillus simplex lineages, comparisons of experimental drugs with a control with respect to the prolongation of survival of chronic myelogenous leukemia patients, and comparisons of curves describing a morphological structure along the spinal cord between variants of the EphA4 gene in mice.
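
A rough sketch of the max-t idea behind such simultaneous inference (a generic, simulation-based version using a normal approximation, not the exact implementation of the referenced framework; the three-group example data and contrasts are invented): linear contrasts are tested jointly by comparing each t-type statistic with a quantile of the simulated maximum of the absolute statistics, which accounts for their correlation.

    import numpy as np

    rng = np.random.default_rng(6)
    groups = [rng.normal(mu, 1.0, size=30) for mu in (0.0, 0.3, 1.0)]
    est = np.array([g.mean() for g in groups])
    cov = np.diag([g.var(ddof=1) / g.size for g in groups])

    K = np.array([[-1, 1, 0],          # group 2 vs group 1
                  [-1, 0, 1]])         # group 3 vs group 1 (many-to-one comparisons)
    c_est = K @ est
    c_cov = K @ cov @ K.T
    t_stats = c_est / np.sqrt(np.diag(c_cov))

    # Simulate the max-|z| reference distribution from the contrast correlation
    # (normal approximation of the max-t distribution).
    corr = c_cov / np.sqrt(np.outer(np.diag(c_cov), np.diag(c_cov)))
    sims = rng.multivariate_normal(np.zeros(len(c_est)), corr, size=100_000)
    crit = np.quantile(np.abs(sims).max(axis=1), 0.95)

    print(np.round(t_stats, 2), "critical value:", round(float(crit), 2))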

Oct 31, 2012
Biostatistical modeling and analysis of combined fMRI and EEG measurements

The purpose of brain mapping is to advance the understanding of the relationship between structure and function in the human brain. Several techniques, each with its own advantages and disadvantages, exist for recording neural activity. Functional magnetic resonance imaging (fMRI) has a high spatial resolution but a low temporal resolution. It also suffers from a low signal-to-noise ratio in event-related experimental designs, which are commonly used to investigate neuronal brain activity. On the other hand, the high temporal resolution of electroencephalography (EEG) recordings allows provoked event-related potentials to be captured. Although 3D maps derived by EEG source reconstruction methods have a low spatial resolution, they provide complementary information about the location of neuronal activity. There is a strong interest in combining data from both modalities to gain a deeper knowledge of brain functioning through advanced statistical modeling. In this thesis, a new Bayesian method is proposed for enhancing fMRI activation detection by the use of EEG-based spatial prior information in stimulus-based experimental paradigms. This method builds upon a newly developed fMRI-only activation detection method. In general, activation detection corresponds to stimulus predictor components having an effect on the fMRI signal trajectory in a voxelwise linear model. We model and analyze stimulus influence by a spatial Bayesian variable selection scheme, and extend existing high-dimensional regression methods by incorporating prior information on binary selection indicators via a latent probit regression. For fMRI-only activation detection, the predictor consists of a spatially-varying intercept only. For EEG-enhanced schemes, an EEG effect is added, which is chosen to be either spatially-varying or constant. Spatially-varying effects are regularized by different Markov random field priors. Statistical inference in the resulting high-dimensional hierarchical models becomes rather challenging from a modeling perspective as well as with regard to numerical issues. In this thesis, inference is based on a Markov chain Monte Carlo (MCMC) approach relying on global updates of effect maps. Additionally, a faster algorithm is developed based on single-site updates to circumvent the computationally intensive, high-dimensional, sparse Cholesky decompositions. The proposed algorithms are examined in both simulation studies and real-world applications. Performance is evaluated in terms of convergence properties, the ability to produce interpretable results, and the sensitivity and specificity of the corresponding activation classification rules. The main question is whether the use of EEG information can increase the power of fMRI models to detect activated voxels. In summary, the new algorithms show a substantial increase in sensitivity compared to existing fMRI activation detection methods like classical SPM. Carefully selected EEG prior information additionally increases sensitivity in activation regions that have been distorted by a low signal-to-noise ratio.

Oct 31, 2012