Numerical Properties of GDD

Previous chapters introduced graph diffusion distance (GDD), explored some of its theoretical properties, and demonstrated an efficient method for computing it. This chapter presents several experiments, using the machinery developed in Chapter 4 to explore numerical properties of GDD as well as numerical properties of Algorithm [alg:linear_alg].

Graph Lineages

In this section we introduce several specific graph lineages for which we will compute various intra- and inter-lineage distances. Three of these are well-known families of graphs, and the fourth is defined in terms of a graph box product involving complete graphs:

Path Graphs ($\text{Pa}_n$): 1D grid graphs of length $n$, with aperiodic boundary conditions.

Cycle Graphs ($\text{Cy}_n$): 1D grid graphs of length $n$, with periodic boundary conditions.

Square Grid Graphs ($\text{Sq}_n$): 2D grid graphs of dimensions $n \times n$, with aperiodic boundary conditions: $\text{Sq}_n = \text{Pa}_n \Box \text{Pa}_n$.

“Multi-Barbell” Graphs ($\text{Ba}_n$): constructed as $\text{Cy}_n \Box K_n$, where $K_n$ is the complete graph on $n$ vertices.
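These lineages are straightforward to construct programmatically. The following minimal sketch, assuming NetworkX and taking “length $n$” to mean $n$ vertices, builds one member of each family:

```python
# A minimal sketch (assuming NetworkX) of the four graph lineages:
# paths, cycles, square grids, and "multi-barbell" graphs.
import networkx as nx

def path(n):          # Pa_n: 1D grid, aperiodic boundary
    return nx.path_graph(n)

def cycle(n):         # Cy_n: 1D grid, periodic boundary
    return nx.cycle_graph(n)

def square_grid(n):   # Sq_n = Pa_n box Pa_n
    return nx.cartesian_product(nx.path_graph(n), nx.path_graph(n))

def multi_barbell(n): # Ba_n = Cy_n box K_n
    return nx.cartesian_product(nx.cycle_graph(n), nx.complete_graph(n))
```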

These families are all illustrated in Figure [fig:graph_families].

Graph lineages used in multiple numerical experiments in the main text: 2D grids, paths, cycles, and k-barbell graphs.

[fig:graph_families]

Additionally, some example distances between elements of these graph lineages are illustrated in Figure [fig:gdist_plots]. In these tables we see that, in general, intra-lineage distances are small and inter-lineage distances are large.

Distances $D^2(G, H)$ calculated for several pairs of graphs. The top plot shows distances where $G$ and $H$ are both chosen from $\{ \text{Grid}_{13 \times 13}, P_{169}, C_{169}, \text{Ba}_{13} \}$. At bottom, distances are calculated from $G$ chosen in $\{ \text{Grid}_{12 \times 12}, P_{144}, C_{144}, \text{Ba}_{12} \}$ to $H$ chosen in $\{ \text{Grid}_{13 \times 13}, P_{169}, C_{169}, \text{Ba}_{13} \}$. As expected, diagonal entries are smallest.

[fig:gdist_plots]

Numerical Optimization Methods

We briefly discuss here the other numerical methods we have used to calculate $\tilde{D}^2$ and $D^2$. In general we have found these methods inferior to the algorithm presented in Chapter 4, but we present them here for completeness.

Nelder-Mead in Mathematica. For very small graph pairs ($n_1 \times n_2 \leq 100$) we are able to find optimal $P, \alpha, t$ using constrained optimization in Mathematica 11.3 (Wolfram Research, Inc., n.d.) via NMinimize, which uses Nelder-Mead as its backend by default. The size limitation made this approach unusable for any real experiments.

Orthogonally Constrained Opt. We also tried a variety of codes specialized for numeric optimization subject to orthogonality constraints. These included (1) the Python package PyManopt (Townsend, Koep, and Weichwald 2016), a code designed for manifold-constrained optimization; (2) gradient descent in TensorFlow using the penalty function $g(P) = c \left|\left| P^T P - I \right|\right|_F$ (with $c \ll 1$ a small positive constant weight) to maintain orthogonality; and (3) an implementation of the Cayley reparametrization method from (Wen and Yin 2013) (written by the authors of that same paper). In our experience, these codes were slower, with poorer scaling with problem size, than combinatorial optimization over subpermutation matrices, and did not produce improved results on our optimization problem.
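As an illustration of the penalty approach, the following is a toy NumPy sketch of soft-constrained gradient descent. A simple quadratic objective stands in for the actual distance objective, and the squared norm $||P^T P - I||_F^2$ is penalized (a smooth variant of the unsquared penalty described above); all names here are illustrative.

```python
# Toy sketch of soft orthogonality via a penalty term. The objective
# ||P - A||_F^2 is a stand-in for the actual distance objective.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))        # stand-in problem data
P = rng.standard_normal((8, 5)) * 0.1  # initial iterate
c, lr = 0.05, 1e-2                     # penalty weight, step size

for _ in range(2000):
    grad_f = 2 * (P - A)               # gradient of the toy objective
    E = P.T @ P - np.eye(5)
    grad_pen = 4 * P @ E               # gradient of ||P^T P - I||_F^2
    P -= lr * (grad_f + c * grad_pen)

# Residual constraint violation under the soft penalty (nonzero for small c):
print(np.linalg.norm(P.T @ P - np.eye(5)))
```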

Black-Box Optimization Over $\alpha$.

We compare in more detail two methods of joint optimization over $\alpha$ and $P$ when $P$ is constrained to be a subpermutation matrix in the diagonal basis for $L(G_1)$ and $L(G_2)$. Specifically, we compare our approach given in Algorithm [alg:linear_alg] to univariate optimization over $\alpha$, where each function evaluation consists of full optimization over $P$. Figure [fig:alg_timing] shows the results of this experiment. We randomly sample pairs of graphs as follows (a sketch of this sampling procedure appears after the list):

  1. $n_1$ is drawn uniformly from $[5, 120]$.

  2. $n_2$ is drawn uniformly from $[n_1, n_1 + 60]$.

  3. $G_1$ and $G_2$ are generated by adding edges according to a Bernoulli distribution with probability $p$. We ran 60 trials for each $p$ in $\{.125, .25, .375, .5, .625, .75, .875\}$.
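A minimal sketch of this sampling protocol, assuming NetworkX's Erdős-Rényi generator:

```python
# Sample random graph pairs per the protocol above (NumPy + NetworkX).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

def sample_pair(p):
    n1 = int(rng.integers(5, 121))       # n1 uniform on [5, 120]
    n2 = int(rng.integers(n1, n1 + 61))  # n2 uniform on [n1, n1 + 60]
    G1 = nx.gnp_random_graph(n1, p, seed=int(rng.integers(2**31)))
    G2 = nx.gnp_random_graph(n2, p, seed=int(rng.integers(2**31)))
    return G1, G2

pairs = [sample_pair(p)
         for p in (.125, .25, .375, .5, .625, .75, .875)
         for _ in range(60)]             # 60 trials per p
```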

We compute the linear version of the distance for each pair. Because our algorithm finds all of the local minima as a function of $\alpha$, we compute the cost of the golden-section approach as the summed cost of multiple golden-section searches in $\alpha$: one search starting from the initial bracket $[0.618\,\alpha^*, 1.618\,\alpha^*]$ for each local minimum $\alpha^*$ found by our algorithm. We see that our algorithm is always faster by at least a factor of 10, and occasionally faster by as much as a factor of $10^3$. This can be attributed to the fact that the golden-section search is unaware of the structure of the linear assignment problem: it must solve a full $n_2 \times n_2$ linear assignment problem for each value of $\alpha$ it explores. In contrast, our algorithm is able to reuse information from prior calls to the LAP solver, and therefore solves a series of LAP problems whose sizes are monotonically nonincreasing.
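For reference, the univariate baseline could be implemented along the following lines, with SciPy's bounded scalar minimizer standing in for golden-section search. The eigenvalue-mismatch cost $c_{ij}(\alpha) = (\lambda_i/\alpha - \alpha \mu_j)^2$ used here is an assumed stand-in for the exact diagonal-basis objective derived in Chapter 4; the point is that every trial $\alpha$ requires a full LAP solve.

```python
# Sketch of the baseline: each evaluation at a trial alpha solves one
# full linear assignment problem over eigenvalue-mismatch costs.
import numpy as np
from scipy.optimize import linear_sum_assignment, minimize_scalar

def lap_objective(alpha, lam1, lam2):
    # lam1: spectrum of L(G1), length n1; lam2: spectrum of L(G2), n2 >= n1.
    C = (lam1[:, None] / alpha - alpha * lam2[None, :]) ** 2  # n1 x n2 costs
    rows, cols = linear_sum_assignment(C)   # optimal subpermutation matching
    return C[rows, cols].sum()

def baseline_distance(lam1, lam2):
    # Bounded scalar minimization over alpha on the interval [1e-6, 10].
    res = minimize_scalar(lap_objective, args=(lam1, lam2),
                          bounds=(1e-6, 10.0), method="bounded")
    return res.fun, res.x
```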

Comparison of runtimes for our algorithm and bounded golden-section search over the same interval $[10^{-6}, 10]$. Runtimes were measured by a weighted count of evaluations of the Linear Assignment Problem solver, with an $n \times n$ linear assignment problem counted as $n^3$ units of cost. Because our algorithm recovers the entire lower convex hull of the objective function as a function of $\alpha$, we compute the cost of the golden-section search as the summed cost of multiple searches, starting from an interval bracketing each local optimum found by our algorithm. We see that our algorithm is much less computationally expensive, sometimes by a factor of $10^3$. The most dramatic speedup occurs in the regime where $n_1 \ll n_2$. Graphs were generated by drawing $n_1$ uniformly from $[5, 120]$, drawing $n_2$ uniformly from $[n_1, n_1 + 60]$, and then adding edges according to a Bernoulli distribution with $p$ in $\{.125, .25, .375, .5, .625, .75, .875\}$ (60 trials each).

[fig:alg_timing]

Triangle Inequality Violation of $D$ (Exponential Distance) and $\tilde{D}$ (Linear Distance)

As stated in Section 2.3, our full graph dissimilarity measure does not necessarily obey the triangle inequality. In this section we systematically explore conditions under which the triangle inequality is or is not satisfied. We generate triplets $G_1, G_2, G_3$ of random graphs of sizes $n_i$, with $n_1 \in [5, 30]$, $n_2 \in [n_1, n_1 + 30]$, and $n_3 \in [n_2, n_2 + 30]$, by drawing edges from the Bernoulli distribution with probability $p$ (we perform 4500 trials for each $p$ value in $\{.125, .25, .375, .5, .625, .75, .875\}$). We compute the distances $\tilde{D}(G_i, G_k)$ for $(i,k) \in \{ (1,3), (1,2), (2,3) \}$. The results may be seen in Figure [fig:triangle_ineq_histogram]. In this figure we plot a histogram of the “discrepancy score”
$$\text{Disc}(G_1, G_2, G_3) = \tilde{D}(G_1, G_3) / \left( \tilde{D}(G_1, G_2) + \tilde{D}(G_2, G_3) \right),$$
which measures the degree to which a triplet of graphs violates the triangle inequality (a score greater than 1 is a violation), for approximately $3 \times 10^4$ such triplets. It is clear that, especially for the linear definition of the distance, the triangle inequality is not always satisfied. However, we also observe that (for graphs of these sizes) the discrepancy score is bounded: no triple violates the triangle inequality by more than a factor of approximately 1.8. This is shown by the histogram of discrepancies in Figure [fig:triangle_ineq_histogram]. Additionally, the triangle inequality is satisfied in 28184 (95.2%) of cases.
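The discrepancy score itself is a one-line computation; in the sketch below, `dist` is a hypothetical stand-in for the routine computing $\tilde{D}$:

```python
# Discrepancy score of a graph triple; dist is a stand-in for D-tilde.
def discrepancy(dist, G1, G2, G3):
    return dist(G1, G3) / (dist(G1, G2) + dist(G2, G3))

# A triple satisfies the triangle inequality exactly when the score is <= 1:
# frac_ok = sum(discrepancy(dist, *t) <= 1 for t in triples) / len(triples)
```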

We see similar but even stronger results when we run the same experiment with $D^2$ instead of $\tilde{D}^2$; these may also be seen in Figure [fig:triangle_ineq_histogram]. We calculated the discrepancy score analogously, but with $D$ substituted for $\tilde{D}$. We see similarly that the degree of violation is bounded: in this case, no triple violated the triangle inequality by a factor of more than 5, and the triangle inequality was satisfied in 99.8% of the triples. More work is needed to examine this trend; in particular, it would be interesting to examine whether other models of graph generation also satisfy the triangle inequality up to this constant. In both of these cases, the triangle inequality violations may be a result of our optimization procedure finding local minima/maxima for one or more of the three distances computed. We also repeat the above procedure for the same triplets of graphs, but with distances computed not in order of increasing vertex size: calculating $\text{Disc}(G_2, G_1, G_3)$ and $\text{Disc}(G_3, G_2, G_1)$. All of these results are plotted in Figure [fig:triangle_ineq_histogram].

Histograms of triangle inequality violation. These plots show the distribution of $\text{Disc}(G_1, G_2, G_3)$, as defined in the text, for the cases (a) top: the linear or small-time version of distance and (b) bottom: the exponential or arbitrary-time version of distance. We see that for the sizes of graph we consider, the largest violation of the triangle inequality is bounded, suggesting that our distance measure may be an infra-$\rho$-pseudometric for some value of $\rho \approx 1.8$ (linear version) or $\rho \approx 5.0$ (exponential version). See Table [tab:dist_versions] for a summary of the distance metric variants introduced in this paper. We also plot the same histogram for out-of-order (by vertex size) graph sequences: $\text{Disc}(G_2, G_1, G_3)$ and $\text{Disc}(G_3, G_2, G_1)$. Each plot has a line at $x = 1$, the maximum discrepancy score for which the underlying distances satisfy the triangle inequality.

[fig:triangle_ineq_histogram]

Intra- and Inter-Lineage Distances

We compute pairwise distances for sequences of graphs in the graph lineages displayed in Figure [fig:graph_families]. For each pair of graph families (Square Grids, Paths, Cycles, and Multi-Barbells), we compute the distance from the $i$th member of one lineage to the $(i+1)$st member of each other lineage, and take the average of the resulting distances from $i = 1$ to $i = 12$. These distances are listed in Table [tab:dist_table]. As expected, average distances within a lineage are smaller than distances from one lineage to another.

Mean distances between graphs in several lineages. For two lineages $G_1, G_2, \ldots$ (listed at left) and $H_1, H_2, \ldots$ (listed at the top), each entry shows the mean distance $D(G_i, H_{i+1})$ (where the average is taken over $i = 1$ to $12$). As expected, we see that the distance from elements of a graph lineage to other members of the same lineage (the diagonal entries of the table) is smaller than distances taken between lineages. Furthermore, as expected, 1D paths are more similar (but not equal) to 1D cycles than to other graph lineages.
                  Square Grids   Paths       Cycles      Multi-Barbells
  Square Grids    0.0096700      0.048162    0.046841    0.63429
  Paths           0.30256        0.0018735   0.010300    2.1483
  Cycles          0.27150        0.0083606   0.0060738   2.0357
  Multi-Barbells  0.21666        0.75212     0.72697     0.029317

[tab:dist_table]

We note here that the idea of computing intra- and inter-lineage distances is similar to recent work (Hartle et al. 2020) computing distances between graph ensembles: classes of similarly-generated random graphs. Graph diffusion distance has been previously shown (in (Hartle et al. 2020)) to capture key structural information about graphs; for example, GDD is known to be sensitive to certain critical transitions in ensembles of random graphs as the random parameters are varied. This is also true for our time-dilated version of GDD. More formally: let $G_p$ and $G'_p$ represent random graphs on $n$ vertices, drawn from the Erdős-Rényi distribution with edge probability $p$. Then $D(G_p, G'_p)$ has a local maximum at $p = \frac{1}{n}$, representing the transition between disconnected and connected graphs. This is true for our distance as well as the original version due to Hammond.
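This transition can be observed even with the Hammond-style (equal-size, no prolongation $P$) version of the distance. A minimal sketch follows, using the $L = A - D$ sign convention of this paper; under these assumptions, one would expect the mean distance between independent Erdős-Rényi draws to peak near $p = 1/n$.

```python
# Estimate the mean Hammond-style diffusion distance between independent
# Erdos-Renyi draws near the connectivity transition p = 1/n.
import numpy as np
import networkx as nx
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

def hammond_gdd(G1, G2):
    # sup_t || exp(t L1) - exp(t L2) ||_F, with L = A - D as in this paper.
    L1 = -nx.laplacian_matrix(G1).toarray().astype(float)
    L2 = -nx.laplacian_matrix(G2).toarray().astype(float)
    f = lambda t: -np.linalg.norm(expm(t * L1) - expm(t * L2), "fro")
    res = minimize_scalar(f, bounds=(1e-6, 10.0), method="bounded")
    return -res.fun

n = 40
for p in (0.5 / n, 1.0 / n, 2.0 / n):
    d = np.mean([hammond_gdd(nx.gnp_random_graph(n, p),
                             nx.gnp_random_graph(n, p)) for _ in range(20)])
    print(p, d)
```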

Graph Limits

Here, we provide preliminary evidence that graph distance measures of this type may be used in the definition of a graph limit: a graph-like object which is the limit of an infinite sequence of graphs. This idea has been previously explored, most famously by Lovász (Lovász 2012), whose definition of a graph limit (called a graphon) is as follows. Recall the definition of graph cut-distance $D_{\text{cut}}(G, H)$ from Equation [defn:cut_dist], namely: the cut distance is the maximum discrepancy in sizes of edge-cuts, taken over all possible subsets of vertices, between two graphs on the same vertex-set. A graphon is then an equivalence class of Cauchy sequences of graphs¹, under the equivalence relation that two sequences $G_1, G_2, \ldots$ and $H_1, H_2, \ldots$ are equivalent if $D_{\text{cut}}(G_i, H_i)$ approaches 0 as $i \rightarrow \infty$.
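For small graphs on a shared vertex set, the cut distance in this sense can be computed by brute force; in the sketch below, the $1/n^2$ normalization is an assumption, and complement symmetry of cuts is used to halve the search.

```python
# Brute-force cut distance between two graphs on the same small vertex set.
from itertools import combinations
import networkx as nx

def cut_distance(G, H):
    nodes = list(G.nodes())
    n = len(nodes)
    best = 0.0
    # cut(S) = cut(V \ S), so subsets up to size n // 2 suffice.
    for r in range(1, n // 2 + 1):
        for S in combinations(nodes, r):
            S = set(S)
            best = max(best, abs(nx.cut_size(G, S) - nx.cut_size(H, S)))
    return best / n**2    # assumed normalization by n^2
```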

We propose a similar definition of graph limits, but with our diffusion distance substituted as the distance measure used in the definition of a Cauchy sequence of graphs. Hammond et al. argue in (Hammond, Gur, and Johnson 2013) that their variant of diffusion distance may be a more descriptive distance measure than cut-distance. More specifically, they show that on some classes of graphs, some edge deletions ‘matter’ much more than others: removal of a single edge changes the diffusive properties of the graph significantly. However, the graph-cut distance between the new and old graphs is the same regardless of which edge has been removed, while the diffusion distance captures this nuance. For graph limits, however, our generalization to unequal-sized graphs via $P$ is of course essential. Furthermore, previous work (Borgs et al. 2019) on sparse graph limits has shown that in the framework of Lovász all sequences of sparse graphs converge (in the infinite-size limit) to the zero graphon. Graph convergence results specific to sparse graphs include the Benjamini-Schramm framework (Benjamini and Schramm 2001), in which graph sequences are compared using the distributional limits of subgraph frequencies. These two graph comparison methods both have the characteristic that the “limit object” of a sequence of graphs is rigorously defined. It is unclear whether density functions on the unit square are the best choice of limit objects for graphs; while graphons have many nice properties, as detailed by Lovász, other underlying limit objects may be a more natural choice for sparse graphs. In this section we attempt to show empirically that such a limit object of graph sequences under GDD may exist, and therefore merits further investigation.

We examine several sequences of graphs of increasing size for the required Cauchy behavior (in terms of our distance measure) to justify this variant definition of a “graph limit”. For each of the graph sequences defined in Section 5.1, we examine the distance between successive members of the sequence, plotting $D^2(G_n, H_{n+1})$ for each choice of $G$ and $H$. These sequences of distances are plotted in Figure [fig:g_limits].

In this figure, we see that distances generally diverge between different graph lineages, and converge for successive members of the same lineage, as $n \rightarrow \infty$. We note the exceptions to this trend:

  1. The distances between $n$-paths and $(n+1)$-cycles appear to be converging; this is intuitive, as we would expect the difference between the two spectra, caused by distortion from the ends of the path graph, to decrease in effect as $n \rightarrow \infty$.

  2. We also show analytically, under similar assumptions, that the distance between successive path graphs shrinks to zero (Theorem [thm:pa_lim_lem]).

We do not show that all similarly-constructed graph sequences display this Cauchy-like behavior. We hope to address this deeper question, as well as a more formal exploration of the limit object, with one or more modified versions of the objective function (see Section 3.8.1).

Limit of Path Graph Distances

In this section, we demonstrate analytically that the sequence of path graphs of increasing size is Cauchy in the sense described in the previous section. In the following theorem (Theorem [thm:pa_lim_lem]), we assume that the optimal value of $t$ approaches some value $\tilde{t}$ as $n \rightarrow \infty$. We have not proven this to be the case, but we have observed this behavior for both square grids and path graphs (see Figure [fig:path_lim] for an example of this behavior). Lemma [lem:path_eigen_limit] and Theorem [thm:pa_lim_lem] show a related result for path graphs; we note that the spectrum of the Laplacian (as we define it in this paper) of a path graph on $n$ vertices is given by
$$\lambda_k = -2 + 2 \cos\left(\frac{k \pi}{n}\right), \quad k \in \{0, \ldots, n-1\}.$$
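This closed form is easy to verify numerically under the $L = A - D$ convention:

```python
# Check the closed-form path-graph spectrum against a numerical eigensolve,
# with the sign convention L = A - D used in this paper.
import numpy as np
import networkx as nx

n = 10
L = -nx.laplacian_matrix(nx.path_graph(n)).toarray().astype(float)
numeric = np.sort(np.linalg.eigvalsh(L))
closed = np.sort(-2 + 2 * np.cos(np.pi * np.arange(n) / n))
assert np.allclose(numeric, closed)
```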

[lem:path_eigen_limit] For any finite $k, t$, we have
$$\lim_{n \rightarrow \infty} n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2 = 0.$$

Clearly, for finite $k, t$,
$$\lim_{n \rightarrow \infty} \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right) = 0.$$
Next, write
$$\lim_{n \rightarrow \infty} n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right) = \lim_{n \rightarrow \infty} \frac{ e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} }{\frac{1}{n}}.$$
Evaluating this expression requires applying L'Hôpital's rule. Hence, we have:
$$\begin{aligned} \lim_{n \rightarrow \infty} \frac{ e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} }{\frac{1}{n}} &= \lim_{n \rightarrow \infty} \frac{ 2 \pi k t \left( \frac{\sin\left(\frac{\pi k}{n}\right) e^{2 t \left(\cos\left(\frac{\pi k}{n}\right) - 1\right)}}{n^2} - \frac{\sin\left(\frac{\pi k}{n+1}\right) e^{2 t \left(\cos\left(\frac{\pi k}{n+1}\right) - 1\right)}}{(n+1)^2} \right) }{ \frac{-1}{n^2} } \\ &= 2 \pi k t \lim_{n \rightarrow \infty} \left( \frac{n^2 \sin\left(\frac{\pi k}{n+1}\right) e^{2 t \left(\cos\left(\frac{\pi k}{n+1}\right) - 1\right)}}{(n+1)^2} - \sin\left(\frac{\pi k}{n}\right) e^{2 t \left(\cos\left(\frac{\pi k}{n}\right) - 1\right)} \right). \end{aligned}$$
Since both of the limits
$$\lim_{n \rightarrow \infty} \frac{n^2 \sin\left(\frac{\pi k}{n+1}\right) e^{2 t \left(\cos\left(\frac{\pi k}{n+1}\right) - 1\right)}}{(n+1)^2} \quad \text{and} \quad \lim_{n \rightarrow \infty} \left( -\sin\left(\frac{\pi k}{n}\right) e^{2 t \left(\cos\left(\frac{\pi k}{n}\right) - 1\right)} \right)$$
exist (and are 0), we conclude
$$\lim_{n \rightarrow \infty} n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right) = 0.$$
Since the difference of exponentials itself also tends to 0, the product of these two quantities,
$$n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2,$$
tends to 0 as well, which proves the lemma.

[thm:pa_lim_lem] If $\lim_{n \rightarrow \infty} \arg\sup_t D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t \right)$ exists, then:
$$\lim_{n \rightarrow \infty} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \right) = 0.$$

Assume that $\lim_{n \rightarrow \infty} \arg\sup_t D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t \right) = \tilde{t}$. Then we must have
$$\lim_{n \rightarrow \infty} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \right) \leq \lim_{n \rightarrow \infty} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, \tilde{t} \right).$$
Hence, it remains only to prove that
$$\lim_{n \rightarrow \infty} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t \right) = 0$$
for any finite $t$ (which will then include $\tilde{t}$). First, for any particular $(n+1) \times n$ subpermutation matrix $S$, note that

$$\begin{aligned} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t \right) &= \inf_{\alpha > 0} \inf_{P | \mathcal{C}(P)} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t, P, \alpha \right) \\ &\leq D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t, \alpha = 1, U_{n+1}^T S U_n \right). \end{aligned}$$
Here, $U_n$ and $U_{n+1}$ are the matrices which diagonalize $L(\text{Pa}_n)$ and $L(\text{Pa}_{n+1})$ respectively (note also that a diagonalizer of a matrix $L$ also diagonalizes $e^L$). If at each $n$ we select $S$ to be the subpermutation matrix $S = \left[ \begin{array}{c} I \\ 0 \end{array} \right]$, then (using the same argument as in Theorem [thm:LAP_bound]) the objective function simplifies to:
$$\begin{aligned} D^2\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t, P = U_{n+1}^T S U_n, \alpha = 1 \right) &= \left|\left| S e^{t \Lambda_{\text{Pa}_n}} - e^{t \Lambda_{\text{Pa}_{n+1}}} S \right|\right|_F^2 \\ &= \sum_{k=0}^{n-1} \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2 \\ &\leq \max_{0 \leq k \leq n-1} n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2. \end{aligned}$$
By Lemma [lem:path_eigen_limit], for any finite $k, t$, we have
$$\lim_{n \rightarrow \infty} n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2 = 0.$$
So for any $\epsilon > 0$, there exists $N$ such that when $n \geq N$, for any $k$,
$$n \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2 < \epsilon.$$
But then
$$\sum_{k=0}^{n-1} \left( e^{t(-2 + 2 \cos(\frac{\pi k}{n}))} - e^{t(-2 + 2 \cos(\frac{\pi k}{n+1}))} \right)^2 < \epsilon,$$
as required. Thus, the Cauchy condition is satisfied for the lineage of path graphs $\text{Pa}_n$.
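The bound in this proof is also easy to check numerically: the summed squared spectral differences between $\text{Pa}_n$ and $\text{Pa}_{n+1}$ shrink as $n$ grows (evaluated here at $t = 1$).

```python
# Numeric illustration of the Cauchy bound: the summed squared spectral
# differences between Pa_n and Pa_{n+1} shrink as n grows.
import numpy as np

def summed_sq_diff(n, t=1.0):
    k = np.arange(n)
    a = np.exp(t * (-2 + 2 * np.cos(np.pi * k / n)))
    b = np.exp(t * (-2 + 2 * np.cos(np.pi * k / (n + 1))))
    return np.sum((a - b) ** 2)

for n in (10, 100, 1000):
    print(n, summed_sq_diff(n))   # decreases with n
```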

Limiting behavior of $D$ and two parameters as path graph size approaches infinity. All distances were calculated between $\text{Path}_n$ and $\text{Path}_{n+1}$. We plot the value of the objective function, as well as the optimal values of $\alpha$ and $t$, as $n \rightarrow \infty$. The optimal $\alpha$ rapidly approaches 1 and the optimal distance tends to 0. Additionally, the optimal $t$ value approaches a constant ($t \approx 0.316345$), providing experimental validation of the assumption we make in proving Theorem [thm:pa_lim_lem].

[fig:path_lim]

Given a graph lineage which consists of levelwise box products of two lineages, it seems natural to use our upper bound on successive distances between graph box products to prove convergence of the sequence of products. As an example, the lineage of square grids is the levelwise box product of the lineage of path graphs with itself. However, in this case we see that the bound may not be very tight. Applying Equation [eqn:graph_product_special_case] from Theorem [thm:sq_grid_lim_lem], we have (for any $t_c, \alpha_c$):
$$\begin{aligned} D\left( \text{Sq}_n, \text{Sq}_{n+1} \right) &\leq D\left( \text{Sq}_n, \text{Sq}_{n+1} \,\middle|\, t_c, \alpha_c \right) \\ &\leq D\left( \text{Pa}_n, \text{Pa}_{n+1} \,\middle|\, t_c, \alpha_c \right) \left( \left|\left| e^{\frac{t_c}{\alpha_c} L(\text{Pa}_n)} \right|\right|_F + \left|\left| e^{t_c \alpha_c L(\text{Pa}_{n+1})} \right|\right|_F \right). \end{aligned}$$
As we can see in Figure [fig:sq_dist_vs_bound], the right side of this inequality appears to tend to a nonzero value as $n \rightarrow \infty$, whereas the actual distance (calculated by our optimization procedure) appears to tend to zero.

Comparison of the distance $D(\text{Sq}_n, \text{Sq}_{n+1})$, as a function of $n$, to the upper bound calculated as the optimum of the distance between $\text{Pa}_n$ and $\text{Pa}_{n+1}$. We see that the upper bound converges to a constant $D \approx 0.01782$, whereas the actual distance appears to be converging to 0 as $n \rightarrow \infty$.

[fig:sq_dist_vs_bound]

Cauchy-like behavior of graph distance as a function of sequence index $n$. The distance between successive square grids and all other graph sequences appears to diverge (the same behavior is seen for k-barbells). Notably, the distance between $\text{Grid}_{n \times n}$ and $\text{Grid}_{(n+1) \times (n+1)}$ does not appear to converge until much higher values of $n$ ($n > 100$) than the other convergent series. This may be because the distances calculated are an upper bound, which may converge more slowly than the ‘true’ optima.

[fig:g_limits]


  1. Here we are calling a sequence of graphs “Cauchy” if for any $\epsilon > 0$ there is some $N$ such that for all $n, m \geq N$, $D_{\text{cut}}(G_n, G_m) < \epsilon$.