Abstracts

Durham University

School of Engineering and Computing Sciences

Mumtaz Ahmad (Nancy University)

Sequential computation of mappings over the field and set of integers

A permanent challenge in computer science is to increase the performance of computations and the speed of processors. A computer decomposes a computation into elementary operations on elementary objects. For instance, a 32-bit processor can only perform operations on 32 bits, and any transformation of a data structure must be decomposed into successive operations on 32 bits. To exchange the contents of registers, the usual solution to ensure the completeness of the computation is to make copies of the initial data. But this solution can generate memory errors when the structures are too large, or at least degrades the performance of the computation. Indeed, such operations involving several registers in a microprocessor, implemented through a compiler or an electronic circuit, will have to copy some registers into cache memory or RAM, with a loss of speed, or to duplicate signals in the chip design itself, with extra power consumption.

We design algorithms to compute mappings from a set \(S^{n}\) to itself by a sequence of linear assignments. This sequential computation requires no extra variables other than the variables available as input. Equivalently, we perform an arbitrary transformation of objects by successive elementary local transformations inside these objects, using only their successive states. Since no extra writing memory is required in such an in situ design of computations, it finds applications in the optimization of programs and of chip design with respect to the number of variables. Such in situ computations make it possible to improve the performance of calculations and to compute an operation on \(n\) registers by a sequence of assignments using only these \(n\) registers.

In this talk, we claim that such in situ designs of computations exist over fields as well as over the set of integers. We implement the in situ sequential algorithms and compute such mappings successfully. All in situ programs considered in this approach operate on consecutive components, traversing the list of all indices, possibly several times, in forward or backward order.
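
As a toy illustration of the flavour of in situ computation (this is not the authors' construction), the classical exchange of two registers can be written as a sequence of three assignments over the integers, reading and writing only the two input variables:

    def swap_in_situ(x, y):
        # Each step is a single assignment whose right-hand side reads
        # only the current values of x and y; no temporary variable is used.
        x = x + y      # x holds the sum
        y = x - y      # y holds the original x
        x = x - y      # x holds the original y
        return x, y

    assert swap_in_situ(3, 7) == (7, 3)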


Chiranjit Chakraborty (University of Edinburgh)

Instance compression for the polynomial hierarchy and beyond

We define instance compressibility for parametric problems in PH and PSPACE. We observe that the problem \(\Sigma_{i}CircuitSAT\) of deciding satisfiability of a quantified Boolean circuit with \(i-1\) alternations of quantifiers starting with an existential quantifier is complete for parametric problems in \(\Sigma_{i}^{p}\) with respect to W-reductions. Analogously, the problem QBCSAT (Quantified Boolean Circuit Satisfiability) is complete for parametric problems in PSPACE with respect to W-reductions. We show the following results about these problems:

  1. If \(CircuitSAT\) is non-uniformly compressible within NP, then \(\Sigma_{i}CircuitSAT\) is non-uniformly compressible within NP, for any \(i \geq 1\).
  2. If \(QBCSAT\) is non-uniformly compressible (or even if satisfiability of quantified Boolean CNF formulae is non-uniformly compressible), then \(PSPACE \subseteq NP/poly\) and PH collapses to the third level.

Similar results hold if we allow a small amount of error in the compression.


Amin Coja-Oghlan (University of Warwick)

Phase transitions and computational complexity

Phase transitions have long been studied in statistical mechanics in the context of disordered systems such as glasses. Indeed, physicists have developed ingenious, albeit non-rigorous, techniques for the study of phase transitions. In recent years it has emerged that phase transitions also play a key role in computer science. In this talk I am going to give an overview of this exciting area, and explain how ideas from statistical mechanics shed light on the mystery of computational complexity. In addition, I am going to survey the recent progress in turning the statistical mechanics work into a rigorous theory.


Paul Haynes (Sheffield Hallam)

Dynamic graph-based search in unknown environments

The autonomous exploration of an (unknown) environment by a team of robots has recently gained significant attention. Global self-localisation is the associated key problem, which is still far from being solved in the general setting. When a multi-robot team is involved, the problem is further aggravated as, in general, robots do not know the relative positions of each other and a sophisticated control architecture is required. Usually, the so-called multi-SLAM approach is adopted to deal with uncertainty, which may be computationally expensive.

The proposed graph-theoretical approach addresses both exploration, by building a map of the environment in the form of a geometric graph, and multi-robot control architecture, by coordinating the robot team in a structured yet adaptive manner. Underpinning this is the principle of localisation by measurement of a moving robot (or entity) relative to two stationary entities. This naturally leads to representing a clique of three entities (3-clique) by an induced sub-graph clique of the infinite triangular grid graph \(T^\infty\). Each node of this localisation graph \(L\) represents the location of an entity in the environment. Immediately surrounding nodes are appended to \(L\) by connecting edges if the entity's scanning device (e.g. a laser) does not detect an obstacle. Movement throughout \(L\) is achieved by one entity moving to an unoccupied node in the periphery of the 3-clique. As entities move, \(L\) is extended accordingly. In addition, a movement graph \(M\) is considered which is dual to \(L\). Thus, \(L\) serves to maintain a topological view of the environment and navigability, whilst \(M\) represents manoeuvrability, from which a suitable algorithm identifies a (possibly) optimal walk. Both graphs are dynamic since (i) a walk is computed within \(M\), (ii) entities are physically moved, and (iii) entity sensors extend the graphs \(L\) and \(M\). This repeats until the whole environment is discovered. An (optimal) walk within \(M\) is computed by a Hamiltonian spanning walk algorithm developed by the authors. The dynamic construction of the graph permits a certain amount of intelligent behaviour: for example, a walk may include a path whose nodes can be classed as visited because all of their surrounding nodes have been visited, and a simple depth-first search can identify and remove such regions. The algorithm runs in \(O(n^2)\) time.
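
A heavily simplified sketch of the pruning idea above (the graph representation and names are illustrative assumptions, not the authors' implementation): a node of the movement graph whose entire neighbourhood has already been visited can itself be classed as visited and dropped from the remaining walk. For this particular rule a single sweep suffices, since marking such a node cannot newly surround any other node.

    def prune_covered(M, visited):
        # M maps each node of the movement graph to its set of neighbours;
        # 'visited' is the set of nodes already explored.  Any unvisited
        # node all of whose neighbours are visited is classed as visited.
        covered = set(visited)
        for v in M:
            if v not in covered and M[v] and M[v] <= covered:
                covered.add(v)
        return covered

    # Example: the middle node of a three-node path is surrounded by
    # visited nodes, so it need not be physically visited.
    M = {1: {2}, 2: {1, 3}, 3: {2}}
    assert prune_covered(M, {1, 3}) == {1, 2, 3}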


Bill Jackson (Queen Mary University of London)

Constructing long cycles in graphs

I will survey algorithmic results concerning the construction of a long cycle or a large Eulerian subgraph in a graph \(G=(V,E)\). I will also outline polynomial-time algorithms, due to Bilinski, Ma, Yu and myself, which construct an Eulerian subgraph with at least \(|E|^{0.75}\) edges when \(G\) is 3-edge-connected and a cycle with at least \(|V|^{0.75}\) vertices when \(G\) is 3-connected and claw-free.


Ross Kang (Durham University)

Induced matchings in subcubic planar graphs

We present a linear-time algorithm that, given a planar graph with \(m\) edges and maximum degree 3, finds an induced matching of size at least \(m/9\). This is best possible.


Dieter Kratsch (University of Metz)

Exact exponential algorithms

Today most computer scientists believe that NP-hard problems cannot be solved by polynomial time algorithms. From the polynomial-time perspective all NP-complete problems are equivalent but their exponential-time properties vary widely. Why do some NP-hard problems seem to be easier than others? What are the algorithmic techniques for solving hard problems significantly faster than the exhaustive brute-force search, trying all possible solutions? The algorithms addressing these questions are known as exact exponential algorithms. The history of exact exponential algorithms for NP-hard problems dates back to the 1960s. Two classical examples are Bellman, Held and Karp’s dynamic programming algorithm for the Traveling Salesman problem and Ryser’s inclusion-exclusion formula for the permanent. The design and analysis of exact algorithms leads to a better understanding of hard problems and initiates interesting new combinatorial and algorithmic challenges. The last decade has witnessed a rapid development of the area, with many new algorithmic techniques discovered. This has transformed exact algorithms into a very active research field. This talk provides an introduction to the area and explains some of the fundamental algorithmic techniques.
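
For concreteness, here is a short sketch of the Bellman-Held-Karp dynamic programme mentioned above, which solves the Traveling Salesman problem in \(O(2^n n^2)\) time rather than enumerating all of the roughly \(n!\) tours (the four-city instance at the end is only an illustrative example):

    from itertools import combinations

    def held_karp(dist):
        # dist[i][j] is the cost of travelling from city i to city j.
        # C[(S, j)]: cheapest route that starts at city 0, visits every
        # city in the frozenset S exactly once, and ends at city j.
        n = len(dist)
        C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for S in combinations(range(1, n), size):
                S = frozenset(S)
                for j in S:
                    C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j] for k in S - {j})
        full = frozenset(range(1, n))
        return min(C[(full, j)] + dist[j][0] for j in full)

    # A 4-city example: the optimal tour 0-1-3-2-0 costs 80.
    d = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
    assert held_karp(d) == 80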


Christian Lavault (Université Paris 13)

A note on an extension of Prüfer’s encoding for a forest of uniform hypertrees

Given a hypergraph with \(n\) vertices, the present note designs encoding and decoding algorithms (Algorithms 1 and 2) for a forest of rooted uniform hypertrees and hypercycles that run in linear time and use as few as \(n - 2\) integers in the range \([1,n]\).

Algorithms 1 and 2 are used, respectively, to encode and decode forests of rooted hypertrees, which makes it possible to enumerate these structures using a generalization of the Prüfer sequences. Indeed, knowing the number of forests of rooted hypertrees is key, as it makes it possible to obtain a concise and simple description of the structures; more precisely, a recursive pruning of the leaves makes the description simpler. The encoding and decoding algorithms designed in the note are a simple extension of the classical Prüfer code for rooted trees to an encoding for forests of rooted uniform hypertrees and hypercycles. Their enumeration, according to their number of vertices, hyperedges and hypertrees, is then easily derived from the one-to-one correspondence constructed in Algorithm 1. One of the advantages offered by a bijective proof is the possibility of performing random generation and of learning some characteristic properties of the forests constructed. Thus, in passing, an obvious generalization of Cayley’s enumeration of rooted trees (`Cayley’s formula’) is given, as well as an alternative proof of the explicit expression for the number of hypercycles (found by Selivanov in the early 1970s).
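
For readers unfamiliar with Prüfer sequences, the following is the classical encoding for labelled trees that Algorithms 1 and 2 of the note generalise to forests of rooted uniform hypertrees; this naive sketch runs in quadratic time, whereas the algorithms in the note run in linear time.

    from collections import defaultdict

    def prufer_code(edges, n):
        # Classical Prufer encoding of a labelled tree on vertices 1..n:
        # repeatedly prune the smallest-labelled leaf and record its unique
        # neighbour, producing n - 2 integers in the range [1, n].
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        code = []
        for _ in range(n - 2):
            leaf = min(v for v in range(1, n + 1) if len(adj[v]) == 1)
            neighbour = next(iter(adj[leaf]))
            code.append(neighbour)
            adj[neighbour].discard(leaf)
            adj[leaf].clear()
        return code

    # The star with centre 4 and leaves 1, 2, 3 encodes to [4, 4];
    # Cayley's formula n^(n-2) counts exactly these sequences.
    assert prufer_code([(1, 4), (2, 4), (3, 4)], 4) == [4, 4]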


Barnaby Martin (Durham University)

The complexity of positive equality-free first-order logic

We study the complexity of evaluating positive equality-free first-order sentences on a fixed, finite structure \(B\). Through an algebraic approach, involving surjective hyper-operations on the set \(B\), we uncover various complexity tetrachotomies. Specifically, sets of problems are shown to be either in Logspace, NP-complete, co-NP-complete or Pspace-complete.


Angelica Pachon (University of Warwick)

The decimation process in random k-SAT

It is well known that for densities (clause/variable ratios) up to \(r_k = 2^k \ln 2 - O(k)\) a typical random \(k\)-SAT formula is satisfiable. In spite of decades of research, no algorithm has emerged that finds a satisfying assignment w.h.p. in polynomial time for densities close to \(r_k\). Various algorithms are known to fail already at asymptotically much smaller densities \(r > r_k \frac{\rho}{k}\), where \(\rho > 0\) is a constant (independent of \(k\)) that depends on the sophistication of the algorithm. The best current polynomial-time algorithm fails w.h.p. for densities beyond \(r_k\frac{\ln k}{k}\). Using statistical mechanics techniques, another kind of algorithm, known as message passing algorithms, has arisen. Experiments indicate that such algorithms solve random \(k\)-CNFs in polynomial time for densities very close to \(r_k\), at least for \(k=3,4,5\). These algorithms are Belief Propagation guided decimation (BPdec) and Survey Propagation guided decimation (SPdec), which have come to be considered the only plausible candidates for overcoming the \(r_k\frac{\ln k}{k}\) barrier.

In this work we analyze the space of satisfying assignments when \(t=\alpha n\) variables have been fixed, assuming that their values agree with those of a known solution \(\sigma\), with \(0 < \alpha < 1\). This problem is motivated by the experiment on which BPdec is based, which proceeds by assigning one variable at a time, approximating at each step \(i\) the probability of \(X_{(i)}\) given that \(i-1\) variables are already known.

We use the first moment method to find an upper bound for the distance (with respect to the unfixed variables) between any other solution \(\tau\) and \(\sigma\). This distance goes to zero exponentially fast for densities \(r > (1-\alpha)\,\ln 2\,(2^k-1)\). Observe that these densities lie farther from the satisfiability threshold as the number of fixed variables increases.


Sarah Rees (University of Newcastle)

Using automata in geometric group theory

I am a pure mathematician, working in group theory, using automata and formal languages as tools. My aim in this lecture is to explain how and why I am using them, defining any technical mathematical terms I use. I first learned about formal languages and automata working with topologist Epstein and group theorist Holt to develop algorithms for and theory of automatic groups. This theory exploits properties of discrete hyperbolic geometry that can be represented by finite state automata, and so harnessed for easy computation with many 3-D manifolds. More generally, the theory of automata and formal languages allows analysis of the complexity of normal forms, and of the word problem and other decision problems for groups and related algebraic structures.


Rahul Santhanam (University of Edinburgh)

Beating brute force search for formula satisfiability and QBF validity

Brute-force search algorithms for satisfiability and quantified Boolean formula validity run in time \(2^{n}\,\mathrm{poly}(m)\), where \(n\) is the number of variables and \(m\) is the size of the formula. In the field of exact algorithms for SAT, the goal is to design algorithms for these problems running in time \(2^{n - f(n)}\), where \(f(n)\) (the savings of the algorithm over brute-force search) is asymptotically as large as possible. I will first briefly review known results for satisfiability in this setting. I will then present two new results: one on formula satisfiability and one on QBF validity. The algorithm for formula satisfiability runs in time \(2^{n - \Omega(n)}\) on formulae of linear size; this is the first non-trivial result for satisfiability on general Boolean formulae. The algorithm itself is simple and deterministic. The main contribution is in the analysis, which makes use of the method of random restrictions, more commonly used in proving lower bounds. I will discuss further potential applications of this analytical tool, and make some general observations on connections between lower bound techniques and algorithmic analysis. The algorithm for QBF validity runs in time \(2^{n - \Omega(n/\log n)}\) on formulae with a bounded number of variable occurrences. I will define a new notion of “structured” instance which might be useful in other contexts, and show that the algorithm can be modified to run in time \(2^{n - \Omega(n)}\) on structured instances.
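
As a point of reference, the brute-force baseline that the talk aims to beat is simply exhaustive enumeration of all \(2^{n}\) assignments; the formula representation below is an arbitrary choice made for illustration.

    from itertools import product

    def brute_force_sat(n, formula):
        # Baseline 2^n * poly(m) search: evaluate the formula on every
        # assignment of the n Boolean variables.
        return any(formula(a) for a in product((False, True), repeat=n))

    # Example: (x1 or x2) and (not x1 or x3), on n = 3 variables.
    f = lambda a: (a[0] or a[1]) and ((not a[0]) or a[2])
    assert brute_force_sat(3, f)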


Manuel Sorge (Friedrich-Schiller-Universität)

A parameterized view on Golomb ruler construction

Golomb rulers find applications in the positioning of radio channels, radio astronomy, communication networks, and bioinformatics. Golomb rulers are special types of rulers in which the distance between any two marks is unique. In other words, a Golomb ruler is defined by a subset \(R\) of the nonnegative integers such that no two pairs of elements from \(R\) have the same distance. For instance, it is well known that every Golomb ruler with \(n > 4\) marks has to have total length greater than \(n(n - 1)/2\). Erdős conjectured an upper bound of \(n^2 + O(1)\), which is still open. Thus, a natural optimization problem is to construct a Golomb ruler with \(n\) marks having minimum length. Indeed, there are several further optimization and decision problems related to the construction of Golomb rulers; unfortunately, most of these problems have turned out to be computationally hard, mostly NP-hard. The computational hardness and the practical relevance of Golomb ruler construction have also led to a number of computational approaches to this problem, including implementations of exhaustive and local search heuristics and the use of globally distributed computer systems. For instance, minimum-length Golomb rulers for up to 26 marks are known, thanks to an enormous amount of computation time.
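
A minimal sketch of the defining property (the example marks form the classical optimal ruler with 4 marks and length 6):

    from itertools import combinations

    def is_golomb_ruler(marks):
        # Golomb property: no two pairs of marks are the same distance apart.
        distances = [abs(a - b) for a, b in combinations(marks, 2)]
        return len(distances) == len(set(distances))

    assert is_golomb_ruler([0, 1, 4, 6])
    assert not is_golomb_ruler([0, 1, 2, 4])   # distance 2 occurs twice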

Our contribution is to initiate a study, from a parameterized complexity perspective, of the design and analysis of algorithms for Golomb ruler construction. In particular, we study a hypergraph characterization of Golomb ruler construction, observe several combinatorial properties of these hypergraphs, and derive effective (polynomial-time) data reduction rules employing this characterization. More specifically, we provide a cubic-size problem kernel for the problem of deleting a minimum number of marks from a given ruler such that the remaining marks form a Golomb ruler (Golomb Subruler), where the parameter is the number of deleted marks. We have implemented and tested the data reduction rules (as a proof of concept), indicating their practical relevance. Finally, we provide a simplified NP-hardness result for Golomb Subruler, which also implies the W[1]-hardness of a somewhat modified Golomb ruler problem.


Alexander Tiskin (University of Warwick)

Approximate matching in grammar-compressed strings

A grammar-compressed (GC) string is a string generated by a context-free grammar. This compression model includes LZ78 and LZW compression as special cases. We consider the longest common subsequence problem and the local subsequence recognition problem on a GC-text against a plain pattern. We show that, surprisingly, both problems can be solved in time within a polylogarithmic factor of the running time of the best existing algorithms for the same problems on a plain text. Using these results, we then give an efficient algorithm for threshold approximate matching on a GC-text against a plain pattern. Our algorithm improves on the best existing algorithm for this problem, unless the threshold parameter is very small.
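
To fix the uncompressed baseline being generalised, here is the textbook quadratic dynamic programme for the longest common subsequence of two plain strings (this is not the talk's compressed-string algorithm):

    def lcs_length(s, t):
        # dp[i][j] = length of a longest common subsequence of s[:i] and t[:j].
        m, n = len(s), len(t)
        dp = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if s[i - 1] == t[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[m][n]

    assert lcs_length("AGCAT", "GAC") == 2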


Alexei Vernitski (University of Essex)

Astral graphs as a generalisation of star graphs and decomposing graphs into astral graphs

Astral graphs are a generalisation of star graphs. We call a graph astral if there is a linear ordering of its set of vertices \(V\) such that for any three pairwise distinct vertices \(u,v,w\in V\), if \(u < v\) and \(v\) is adjacent to \(w\), then \(u\) is adjacent to \(w\). This property can be re-formulated in the following, more intuitive way: there is a linear ordering of the graph’s set of vertices such that in the adjacency matrix, non-zero entries in each row (column) form an initial segment of that row (column).
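
A small sketch of the check implied by this definition, for a given ordering of the vertices (the graph representation is an illustrative assumption):

    def is_astral_order(adj, order):
        # adj maps each vertex to its set of neighbours.  The ordering is
        # astral if, whenever u comes before v, every neighbour of v other
        # than u itself is also a neighbour of u.
        for i, u in enumerate(order):
            for v in order[i + 1:]:
                if not (adj[v] - {u}) <= adj[u]:
                    return False
        return True

    # A path a-b-c (a star with centre b) is astral: put the centre first.
    path3 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    assert not is_astral_order(path3, ["a", "b", "c"])
    assert is_astral_order(path3, ["b", "a", "c"])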

The preferential attachment model is a model for generating graphs which has many applications, for example, in molecular biology. New vertices are added one by one. Every time a vertex is added, it chooses to be adjacent to more “popular” vertices (that is, vertices with more edges incident on them) with a higher probability than to less “popular” vertices. Astral graphs are inspired by this model.

Let us call the minimal number of astral graphs into which a graph can be decomposed the astral decomposition number of the graph. Finding the exact value of the astral decomposition number of a graph might be an intractable problem. However, in our computational experiments, a simple greedy algorithm for decomposing a graph into astral graphs produces very consistent results; we therefore use the value generated by the greedy algorithm as the value of the astral decomposition number.

We define the astral index of a graph as its astral decomposition number divided by its number of nodes. Thus, an astral graph has an astral index close to 0. In our computational experiments, we have found that graphs generated by the same model tend to have the same value of the astral index. For instance, random graphs with approximately as many edges as vertices have an astral index of 0.4, and graphs with approximately as many edges as vertices generated by the preferential attachment model have an astral index of 0.3 (irrespective of the size of the graph).


Standa Zivny (University of Oxford)

Hybrid tractability of soft constraint problems

The constraint satisfaction problem (CSP) is a central generic problem in computer science and artificial intelligence: it provides a common framework for many theoretical problems as well as for many real-life applications. Considerable effort has been made in identifying properties which ensure tractability in such problems. In this work, we study hybrid tractability of soft constraint problems; that is, properties which guarantee tractability of the given soft constraint problem, but which do not depend only on the underlying structure of the instance (such as being tree-structured) or only on the types of soft constraints in the instance (such as submodularity).

We present a novel tractable class of soft constraint problems satisfying the so-called joint-winner property. This class includes a machine scheduling problem, constraint problems of arbitrary arities with no overlapping nogoods, and the SoftAllDiff constraint with arbitrary unary soft constraints. Apart from giving an efficient algorithm for the new class, we also show that it is maximal; that is, any extension is NP-hard. An important tool in our investigation will be the notion of forbidden substructures.