Showed that for various optimization criteria, the optimal polygon containing k of n given points must lie near one of its points, so time bounds involving several factors of n can be transformed into bounds linear in n but polynomial in k. Used as subroutines are data structures for finding several nearest neighbors in rectilinear metric spaces, and algorithms for finding the deepest point in an arrangement of cubes or spheres.
Shows that bichromatic nearest neighbors can be maintained under point insertion and deletion essentially as quickly as known solutions to the post office problem, and that the minimum spanning tree can be maintained in the same time except for an additive \(O(\sqrt n)\) needed for solving the corresponding graph problem. TR 92-88's title was actually "Fully dynamic maintenance of Euclidean minimum spanning trees and maxima of decomposable functions". TR 92-05's title left out the part about maxima; that version gave a slower algorithm superseded by the result in 92-88.
This conference paper merged my results from "Dynamic Euclidean minimum spanning trees" with results of my co-authors on nearest neighbors and halfspace range searching.
Combines a method from "Provably good mesh generation" for finding sparse high-dimensional Delaunay triangulations, a method of Dickerson, Drysdale, and Sack ["Simple algorithms for enumerating interpoint distances", IJCGA 1992] for using Delaunay triangulations to search for nearest neighbors, and a method of Frederickson for speeding up tree-based searches. The results are fast algorithms for several proximity problems such as finding the k nearest neighbors to each point in a given point set.
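As a point of reference for the problem being sped up, here is a minimal brute-force sketch (in Python; not from the paper) of the all-k-nearest-neighbors computation. The quadratic pairwise loop is exactly what the Delaunay-based search and Frederickson's tree technique avoid; the function name is illustrative.

```python
import heapq, math

def all_k_nearest_neighbors(points, k):
    """Brute-force baseline: for each point, report its k closest other points.
    points: list of coordinate tuples.  Quadratic in the number of points."""
    result = {}
    for i, p in enumerate(points):
        others = (q for j, q in enumerate(points) if j != i)
        result[p] = heapq.nsmallest(k, others, key=lambda q: math.dist(p, q))
    return result

if __name__ == "__main__":
    pts = [(0, 0), (1, 0), (0, 2), (5, 5)]
    print(all_k_nearest_neighbors(pts, 2))
```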
A parallelization of the quadtree constructions in "Provably good mesh generation", in an integer model of computation, based on a technique of sorting the input points using values formed by shuffling the binary representations of the coordinates. A side-effect is an efficient construction for the "fair split tree" hierarchical clustering method used by Callahan and Kosaraju for various nearest neighbor problems.
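A minimal sketch, assuming integer coordinates, of the bit-shuffling idea: interleave the bits of each point's coordinates to form a Morton (Z-order) key and sort by it. The function names are illustrative, and this ignores the integer-model and parallelization details of the paper.

```python
def shuffled_key(point, bits=32):
    """Interleave the binary representations of an integer point's coordinates,
    producing a Morton / Z-order key."""
    d = len(point)
    key = 0
    for b in range(bits):
        for i, coord in enumerate(point):
            key |= ((coord >> b) & 1) << (b * d + i)
    return key

def shuffle_sort(points, bits=32):
    """Sort integer points by their shuffled-coordinate keys."""
    return sorted(points, key=lambda p: shuffled_key(p, bits))

if __name__ == "__main__":
    print(shuffle_sort([(3, 5), (1, 1), (4, 0), (2, 7)], bits=3))
```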
Any connected nearest neighbor forest with diameter D has \(O(D^6)\) vertices. This was later further improved to \(O(D^5)\) and merged with results of Paterson and Yao into "On nearest neighbor graphs".
Paterson and Yao presented a paper at ICALP showing among other things that any connected nearest neighbor forest with diameter D has \(O(D^9)\) vertices. This paper is the journal version; my contribution consists of improving that bound to \(O(D^5)\), which is tight.
This paper shows how to use my dynamic closest pair data structure from "Dynamic Euclidean minimum spanning trees" for some non-geometric problems including hierarchical clustering, greedy matching, and TSP heuristics. Experiments show variants of my data structures to be faster than previously used heuristics.
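To illustrate the kind of task involved (not the data structure itself), here is a naive quadratic-per-step sketch of the greedy matching heuristic: repeatedly match and remove the closest remaining pair of points. A dynamic closest-pair structure replaces the inner minimum computation.

```python
import math

def greedy_matching(points):
    """Greedy Euclidean matching: repeatedly find the closest remaining pair,
    match it, and remove both points.  Naive version; a dynamic closest-pair
    structure makes the repeated minimum much faster."""
    remaining = list(points)
    matching = []
    while len(remaining) > 1:
        _, i, j = min(
            (math.dist(p, q), i, j)
            for i, p in enumerate(remaining)
            for j, q in enumerate(remaining) if i < j)
        matching.append((remaining[i], remaining[j]))
        for index in sorted((i, j), reverse=True):  # pop larger index first
            remaining.pop(index)
    return matching

if __name__ == "__main__":
    print(greedy_matching([(0, 0), (1, 0), (10, 0), (11, 0)]))
```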
We use my dynamic closest pair data structure from "Dynamic Euclidean minimum spanning trees" to detect collisions among a collection of moving objects in sublinear time per collision. As one application, we can construct the straight skeleton of Aichholzer et al (and the mitered offset curves from which it is defined) in subquadratic time.
We modify my previous data structures for dynamic closest pairs, to use a lazy deletion mechanism, and show in experiments that the results are an improvement on the unmodified structures.
Motivated by redistricting, we consider geometric variants of the stable matching problem in which points (such as the pixels of a discretization of the unit square) are to be matched to a smaller number of centers such that each center has the same number of matches and the matching is stable with respect to Euclidean distance. We show how to solve such problems in polylogarithmic time per matched point, experiment with practical heuristics for solving these problems, and test methods for moving the centers to improve the shape of the matched regions.
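A minimal sketch, assuming equal center capacities and Euclidean distances, of the circle-growing idea behind such matchings: process point-center pairs in increasing order of distance and assign greedily. Because both sides rank partners by the same distances, this greedy order yields a stable matching; it is far slower than the polylogarithmic-per-point methods of the paper.

```python
import math

def stable_point_matching(points, centers, capacity):
    """Match points to centers (each with the same capacity) by scanning
    point-center pairs in increasing distance order; assign a pair when the
    point is free and the center still has room."""
    pairs = sorted(
        (math.dist(p, c), pi, ci)
        for pi, p in enumerate(points)
        for ci, c in enumerate(centers))
    match = [None] * len(points)
    load = [0] * len(centers)
    for _, pi, ci in pairs:
        if match[pi] is None and load[ci] < capacity:
            match[pi] = ci
            load[ci] += 1
    return match

if __name__ == "__main__":
    pts = [(x, y) for x in range(4) for y in range(4)]
    print(stable_point_matching(pts, [(0.5, 0.5), (2.5, 2.5)], capacity=8))
```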
We cluster road networks (modeled as planar graphs, or more generally as graphs obeying a separator theorem) with a given set of cluster centers, by matching graph vertices to centers stably according to distance: no vertex and center that are not matched to each other should both be closer to each other than to their assigned matches. We provide a separator-based data structure for dynamic nearest neighbor queries in planar or separated graphs, which allows the optimal stable clustering to be constructed in time \(O(n^{3/2}\log n)\). We also experiment with heuristics for fast practical construction of this clustering.
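A simple baseline sketch for the same clustering on a graph (not the separator-based \(O(n^{3/2}\log n)\) method of the paper): compute shortest-path distances from every center with Dijkstra's algorithm and then apply the same greedy scan over vertex-center pairs in increasing distance order. Vertex names and the capacity parameter here are illustrative.

```python
import heapq, math

def dijkstra(graph, source):
    """Shortest-path distances from source; graph[u] = list of (v, weight)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def stable_graph_clustering(graph, centers, capacity):
    """Assign each vertex to a center, stably with respect to graph distance,
    by scanning vertex-center pairs in increasing distance order."""
    pairs = sorted(
        (d, v, c)
        for c in centers
        for v, d in dijkstra(graph, c).items())
    match, load = {}, {c: 0 for c in centers}
    for _, v, c in pairs:
        if v not in match and load[c] < capacity:
            match[v] = c
            load[c] += 1
    return match

if __name__ == "__main__":
    g = {
        "a": [("b", 1.0)], "b": [("a", 1.0), ("c", 2.0)],
        "c": [("b", 2.0), ("d", 1.0)], "d": [("c", 1.0)],
    }
    print(stable_graph_clustering(g, ["a", "d"], capacity=2))
```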
We develop data structures for solving nearest neighbor queries for dynamic subsets of vertices in a planar graph, or more generally for a graph in any graph class with small separators (polynomial expansion).
The stable-matching Voronoi diagram of a collection of point sites in the plane, each with a specified area, is a collection of disjoint regions of the plane, one for each site and having the specified area, such that there is no unstable pair: no point and site, not matched to each other, that are closer to each other than the point is to its assigned site and than the site is to the farthest point of its own region. We prove nearly-matching upper and lower bounds on the combinatorial complexity of these diagrams and provide algorithms that can compute them in a polynomial number of primitive steps.
We survey the results from several of my earlier papers: Algorithms for stable matching and clustering in a grid, Defining equitable geographic districts in road networks via stable matching, Reactive proximity data structures for graphs, and Stable-matching Voronoi diagrams: combinatorial complexity and algorithms.
We apply the nearest-neighbor chain algorithm to repeatedly find pairs of mutual nearest neighbors for different distances, speeding up the times for the multi-fragment TSP heuristic, motorcycle graphs, straight skeletons, and other problems.
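A minimal sketch of the classical nearest-neighbor chain idea in its hierarchical-clustering setting, assuming a reducible linkage distance such as complete linkage; the paper applies the same mutual-nearest-neighbor chains to the other problems listed.

```python
import math

def complete_linkage(A, B):
    """Complete-linkage distance between two clusters (frozensets of points)."""
    return max(math.dist(p, q) for p in A for q in B)

def nn_chain_clustering(points, linkage=complete_linkage):
    """Agglomerative clustering by nearest-neighbor chains: walk from cluster
    to nearest cluster until two clusters are mutual nearest neighbors, then
    merge them.  Returns the sequence of merged pairs."""
    active = {frozenset([p]) for p in points}
    chain, merges = [], []
    while len(active) > 1:
        if not chain:
            chain.append(next(iter(active)))
        a = chain[-1]
        b = min((c for c in active if c != a), key=lambda c: linkage(a, c))
        if len(chain) > 1 and b == chain[-2]:
            chain.pop(); chain.pop()   # a and b are mutual nearest neighbors
            active -= {a, b}
            active.add(a | b)
            merges.append((a, b))
        else:
            chain.append(b)
    return merges

if __name__ == "__main__":
    for a, b in nn_chain_clustering([(0, 0), (0, 1), (5, 0), (5, 1)]):
        print(sorted(a), sorted(b))
```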
The nearest-neighbor classification problem considered here takes as input a training set of points in a Euclidean space, each with a classification from some finite set of classes or colors, and then uses that input to predict the classification of new points as being equal to that of the nearest neighbor in the input training set. A training point is irrelevant when removing it from the training set would produce the same predicted classification for all possible new points that might be queried. We describe how to find all of the relevant points, in polynomial time, using a simple algorithm whose only components are construction of a minimum spanning tree of the training set and the computation of extreme points (convex hull vertices) of geometrically transformed subsets of points. For any constant dimension, with \(k\) relevant points resulting from a training set of \(n\) points, this method can be made to take time \(O(n^2+k^2n)\), using only simple algorithms for the minimum spanning tree and extreme point subroutines. For small dimensions, somewhat better but more complicated bounds are possible.
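For context, a minimal sketch of the nearest-neighbor classification rule itself (not of the relevant-point algorithm described above): predict the label of the closest training point.

```python
import math

def nn_classify(training, query):
    """1-nearest-neighbor prediction: return the label of the training point
    closest to the query.  training: list of (point, label) pairs."""
    _, label = min(training, key=lambda t: math.dist(t[0], query))
    return label

if __name__ == "__main__":
    train = [((0, 0), "red"), ((1, 0), "red"), ((5, 5), "blue")]
    print(nn_classify(train, (4, 4)))   # prints "blue"
```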