1 Introduction
Image-based dynamic reconstruction addresses the modeling and estimation of the spatiotemporal relationships among non-stationary scene elements and the sensors observing them. This work tackles estimating the geometry (i.e. the Euclidean coordinates) of a temporally evolving set of 3D points using as input unsynchronized 2D feature observations with known imaging geometry. Our problem, which straddles both trajectory triangulation and image sequencing, naturally arises in the context of uncoordinated distributed capture of an event (e.g. crowd-sourced images or video) and highlights a pair of open research questions: How to characterize and model spatiotemporal relationships among the observations in a data-dependent manner? What role (if any) may available spatial and temporal priors play within the estimation process? The answer to both these questions is tightly coupled to the level of abstraction used to define temporal associations and the scope of the assumptions conferred upon our observations. More specifically, the temporal abstraction level may be quantitative or ordinal (i.e. capture timestamps vs. sequencing), while the scope of the assumptions may be domain-specific (i.e. temporal sampling periodicity/frequency, choice of shape/trajectory basis) or cross-domain (physics-based priors on motion estimates).
Estimating either absolute or relative temporal values for our observations would require explicit assumptions on the observed scene dynamics and/or the availability of sampled temporal information (e.g. image timestamps or sampling frequency priors). In the absence of such information or priors, we strive to estimate observation sequencing based on data-dependent adjacency relations defined by a pairwise affinity measure. Towards this end, we make the following assumptions: A1) 2D observations are samples of the continuous motion of a 3D point set; A2) the (unknown and arbitrary) temporal sampling density allows approximate local linear interpolation of 3D geometry; and A3) temporal proximity implies spatial proximity, but not vice versa (e.g. repetitive or self-intersecting motion). Under such tenets, we can address multi-view capture scenarios comprised of unsynchronized image streams, or the more general case of uncoordinated asynchronous photography. We solve a dictionary learning instance enforcing a discrete differential geometry model, where each dictionary atom corresponds to a 3D estimate, while the set of sparse coefficients describes the spatiotemporal relations among our observations. Our contributions are:


A graph-theoretic formulation of the dynamic reconstruction problem, where 2D observations are mapped to graph nodes, 3D geometry estimates are node attributes, and spatiotemporal affinities correspond to graph edges.

The definition and enforcement of spatiotemporal priors (e.g. anisotropic smoothness, topological compactness/sparsity, and multi-view reconstructability) in terms of the discrete Laplace operator.

Integration of available per-stream (e.g. intra-video) sequencing information into global ordering priors enforced in terms of the Laplacian's spectral signature.
2 Related work
Dynamic reconstruction in the absence of temporal information is an under-constrained problem akin to single-view reconstruction [5, 6, 18, 29, 28, 24]. Some prior work in trajectory triangulation operates under the assumptions of known sequencing information and/or constrained motion priors. Along these lines, Avidan and Shashua [6] estimate dynamic geometry from 2D observations of points constrained to linear and conical motions. However, under the assumption of dense temporal motion sampling, the concept of motion smoothness has been successfully exploited [25, 26, 45, 46, 35, 42, 43, 36, 30, 31]. Park et al. [25] triangulate 3D point trajectories as a linear combination of Discrete Cosine Transform trajectory bases under the constraint of a reprojection system. Such trajectory basis methods suffer low reconstructability when the number of bases is insufficient and/or the motion correlation between object and camera is large. In [26], Park et al. select the number of bases via an N-fold cross-validation scheme. Zhu et al. [45] apply L1-norm regularization to the basis coefficients to enforce sparsity of the bases and improve reconstructability by including a small number of keyframes, which requires user interaction. Valmadre et al. [35] reduce the number of trajectory bases by setting a gain threshold depending on the basis null-space, and propose a high-pass filter method to mitigate low reconstructability for scenarios having no missing 2D observations. Zheng et al. [43, 42] propose a dictionary learning method to estimate 3D shape with partial sequencing information, assuming 3D geometry estimates may be approximated by local barycentric interpolation (i.e. a self-expressive motion prior), and develop a biconvex framework for jointly estimating 3D geometry and barycentric weights. However, the uniform penalization of self-expressive residual error and the fostering of symmetric weight coefficients handicap the approach against non-uniform sampling density. Vo et al. [36] present a spatiotemporal bundle adjustment which jointly optimizes camera parameters, 3D static points, 3D dynamic trajectories, and temporal alignment between cameras using explicit physics priors, but require frame-accurate initial time offsets and low 2D noise. Efforts at developing more detailed spatiotemporal models within the context of NRSFM include [2, 3, 4].
Temporal alignment is a necessary preprocessing step for most dynamic 3D reconstruction methods. Current video synchronization and image sequencing methods [8, 21, 39, 23, 14, 9] rely on 2D image features, foregoing the recovery of 3D structure. Feature-based sequencing methods like [8, 39, 33] make different assumptions on the underlying imaging geometry; for example, while [8] favors an approximately static imaging geometry, [39] prefers viewing configurations with large baselines. Basha et al. [21] overcome the limitation of static cameras and improve accuracy by leveraging the temporal information of frames in individual cameras. Padua et al. [23] determine spatiotemporal alignment among a partially ordered set of observations by framing the problem as a mapping of observations onto a single line, which explicitly imposes a total ordering. Unlike previous methods, Gaspar et al. [16] propose a synchronization algorithm that forgoes tracking corresponding features between video sequences; instead, they synchronize two videos from the relative motion between two rigid objects. Tuytelaars et al. [34] determine sequencing based on the approximate 3D intersections of viewing rays under an affine reference frame. Ji et al. [19] jointly synchronize a pair of video sequences and reconstruct their commonly observed dense 3D structure by maximizing the spatiotemporal consistency of two-view pixel correspondences across video sequences.
3 Graph-based Dynamic Reconstruction
For a set of 2D observations in a single image with known viewing parameters, there is an infinite set of plausible 3D geometry estimates compliant with a pinhole camera model. We posit that for the asynchronous multi-view dynamic reconstruction of smooth 3D motions, the constraints on each 3D estimate can be expressed in terms of its temporal neighborhood. That is, we aim to enforce spatial coherence among successive 3D observations without reliance on instance-specific spatial or temporal models. It is at this point that we face a chicken-and-egg problem, as we need to define a notion of temporal neighborhood in the context of uncontrolled asynchronous capture without timestamps or sampling frequency priors. To address this conundrum we use spatial proximity as a proxy for temporal proximity, which (as prescribed by our third assumption, i.e. A3) is not universally true. Moreover, given that observed events "happen" over a continuous 1D timeline, we would also like to generalize our notion of proximity into one of adjacency, so as to be able to explicitly define the notion of a local neighborhood. Towards this end, we pose the dynamic 3D reconstruction problem in terms of discrete differential geometry concepts.
3.1 Notation and Preliminaries
We consider P dynamic 3D points observed in N images with known intrinsic and extrinsic camera matrices. The 2D observation of the j-th point in the i-th image is denoted by x_j^i, while its 3D position is denoted by X_j^i.
Euclidean Structure Matrix. The position of all P 3D points across all N images is denoted by the matrix

S = [ X_1^1 ... X_P^1 ; ... ; X_1^N ... X_P^N ] ∈ R^{N×3P},    (1)

where each row vector X_j^i ∈ R^{1×3} specifies the 3D Euclidean coordinates of a point. Each matrix row S_i ∈ R^{1×3P} represents the 3D shape of the P points in frame i.

Structure Motion Graph. We define a fully connected graph G = (V, E), and map each input image to a vertex v_i ∈ V. A multi-valued function s: V → R^{3P} maps a vertex into a point in the shape space, allowing the interpretation s(v_i) = S_i. Edge weight values are defined by an affinity function relating points in our shape space, such that w(v_i, v_j) determines the weight of edge e_ij ∈ E.
Discrete Laplace operator. The Laplace operator is a second-order differential operator in n-dimensional Euclidean space, which in Cartesian coordinates equals the sum of unmixed second partial derivatives. For a weighted undirected graph G, the discrete Laplace operator is defined in terms of the Laplacian matrix:

L = D − A,    (2)

where A ∈ R^{N×N} is the graph's symmetric affinity matrix, whose values A_ij correspond to the edge weights w(v_i, v_j), and D is the graph's diagonal degree matrix, whose values are the sums of the corresponding rows in A. (Alternative definitions have been used in [15, 38, 41, 32, 44, 10, 12].) L is positive semi-definite, yielding x^T L x ≥ 0. When convenient, we obviate the explicit dependence of L on its factors.

Affinity Matrix Decomposition. The pairwise affinity function (relating our 3D estimates) is implicitly defined in terms of the estimated affinity entries. Importantly, these affinity values also encode the graph's local topology (i.e. connectivity). Given the a priori unknown topology and distribution of our 3D estimates, we make the following design choices: 1) the affinity matrix is not assumed to be symmetric, yielding a directed structure graph; 2) we explicitly model its decomposition into a degree factor D and a row-stochastic weight factor W, which combined with Eq. (2) yields

L = D(I − W).    (3)

This decomposition decouples the estimation of each node's degree value (encoded in D) from the relative affinity weight values for the node's local neighborhood (encoded in W).
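As a concrete sanity check, the decomposition above can be exercised numerically. The following is a minimal numpy sketch; the 4-node graph, weights, and degree values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy directed structure graph over N = 4 observations.
# Each row of W holds relative affinity weights: non-negative, summing to 1.
W = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])
# D holds a strictly positive (out-)degree value per node.
D = np.diag([1.0, 2.0, 2.0, 1.0])

# Decomposed discrete Laplacian: L = D (I - W).
L = D @ (np.eye(4) - W)

# Because W is row-stochastic, every row of L sums to zero,
# so constant functions lie in the Laplacian's null space.
assert np.allclose(L @ np.ones(4), 0.0)
```

Note how scaling a row of D changes the penalty assigned to that node's neighborhood without altering the relative weights in W.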
3.2 Geometric Rationale
We leverage the interdependencies among our 3D motion estimates S and their discrete Laplace operator L through an optimization framework for their joint estimation. In practice, L describes the topology of the given structure in terms of an affinity function. The values W_ij constitute the entries of the affinity matrix relating the 3D shapes observed at frames i and j. These individual values are determined through the estimation of the W and D variables within our optimization framework. Hence, the affinity function will not be explicitly defined; rather, its values will be instantiated from the results of our optimization, which builds upon the following geometric observations.
Remark 1 (Anisotropic Smoothness Prior).
The norm of the Laplacian's linear form (||L S||) tends to vanish when any given function value S_i approximates the (affinity-weighted) average of its local neighborhood. This follows from the point-wise Laplacian definition

(L S)_i = D_ii ( S_i − Σ_j W_ij S_j ).    (4)

This implies that approximately linear 3D motion segments allow accurate barycentric interpolation from as little as two neighboring 3D motion samples. Conversely, the penalty for poorly approximated non-linear motion segments may be mitigated by the multiplicative contribution of the degree value towards the affinity value, i.e. A_ij = D_ii W_ij.
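The vanishing of Eq. (4) on linear segments can be checked directly. A minimal numpy sketch; the three collinear samples and the chain weights are illustrative assumptions:

```python
import numpy as np

# Three samples of a linear 3D motion segment: the middle sample is
# the midpoint (a barycentric combination) of its two neighbors.
S = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])
# Chain-structured weights: the interior node averages its neighbors.
W = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
D = np.eye(3)
L = D @ (np.eye(3) - W)

# The interior row of L S vanishes: the linear segment is perfectly
# explained by two-neighbor barycentric interpolation.
assert np.allclose((L @ S)[1], 0.0)
```

The endpoint rows do not vanish, since an endpoint has only one neighbor to interpolate from.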
Remark 2 (Collapsing Neighborhood Prior).
The trace of the Laplacian's quadratic form (tr(S^T L S)) tends to vanish as the local neighborhood becomes sparser and more compact. This follows from

tr(S^T L S) = (1/2) Σ_{i,j} A_ij || S_i − S_j ||_2^2.    (5)

This implies sparsity in the global affinity values, while non-zero values A_ij imply proximity among the 3D samples S_i and S_j.
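Identity (5) is the standard graph-Laplacian quadratic form and can be verified numerically; a small numpy check with a random symmetric affinity matrix (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
# Random symmetric affinity matrix with empty diagonal.
A = rng.random((N, N))
A = 0.5 * (A + A.T)
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A    # undirected Laplacian L = D - A
S = rng.standard_normal((N, 3))   # a toy 5-frame, single-point structure

# tr(S^T L S) equals the affinity-weighted sum of squared pairwise
# differences, counting each unordered pair once.
lhs = np.trace(S.T @ L @ S)
rhs = 0.5 * sum(A[i, j] * np.sum((S[i] - S[j]) ** 2)
                for i in range(N) for j in range(N))
assert np.isclose(lhs, rhs)
```

Positive semi-definiteness of L follows immediately, since the right-hand side is a sum of non-negative terms.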
Remark 3 (Spectral Sequencing Prior).
Any line mapping of the vertex set V into a vector v ∈ R^N constitutes an ordering of the graph vertices. Accordingly, when v is a known and constant affinity-preserving mapping, the non-trivial minimization of the quadratic form v^T L v will yield affinity entries approximating the proximities encoded in v. This follows from

v^T L v = (1/2) Σ_{i,j} A_ij ( v_i − v_j )^2.    (6)

This implies enforcing global sequencing priors by coupling L's spectral signature to an input vector v.
3.3 Optimization Cost Function
Based on the geometric properties encoded by the discrete Laplace operator, we formulate the optimization problem

min_{S,W,D}  E_S(S, L) + λ_1 E_T(S, L) + λ_2 E_C(W | Θ) + λ_3 E_R(S | Θ),    (7)

where Θ denotes the aggregation of all input 2D observations and their camera parameters. Each cost function term addresses a particular aspect of our optimization: E_S fosters local smoothness, E_T fosters a linear topological structure, E_C fosters strong convergence among viewing rays, while E_R reduces reprojection errors. For simplicity, we define the problem variables in terms of S and L. However, given the explicit dependence of L on W and D, we redefine the joint optimization of Eq. (7) as a tri-convex optimization problem over S, W, and D.

The next two sections describe the functional models (E_S, E_T, E_C, and E_R) utilized in Eq. (7), the structure of the estimation variables (S, W, and D), and the constraints applicable to them. We present two variants of our general framework, addressing, respectively, the absence and the estimation of global temporal sequencing priors on the elements of our input image set.
4 Solving for Asynchronous Photography
We consider an unordered image set and rely on the Collapsing Neighborhood Prior to estimate an affinity matrix whose connectivity approximates a chain structure. We interpret such connectivity as temporal ordering relations among our observations.
Enforcing anisotropic smoothness. The functional form

E_S(S, L) = || L S ||_F^2    (8)

defines the first term of Eq. (7). Minimizing w.r.t. S attracts each function value S_i towards the convex hull defined by the S_j in its local neighborhood. Conversely, minimizing w.r.t. L (i.e. W and D) fosters the selection of neighboring nodes whose mappings facilitate barycentric interpolation. Here, selection refers to assigning non-zero values in the affinity matrix.

The values in each row of W represent the relative affinity weights of a node. Hence, we enforce 1) the sum of each row to equal 1, and 2) strict non-negativity of all entries in W. Moreover, D represents the out-degree of each node in the directed graph, akin to a global density estimate. We decouple node degree values from the relative affinity weights in W, and enforce strictly positive degree values D_ii > 0, requiring connectivity to at least one adjacent node.
Enforcing Neighborhood Locality. For a directed graph, we define the trace of the Laplacian quadratic form as

tr(S^T L̂ S) = Σ_{i,j} A_ij || S_i − S_j ||_2^2,  with  L̂ = D_out + D_in − A − A^T,    (9)

where L̂ combines the out-degree and in-degree Laplacian matrices and is compliant with the definition in Eq. (5). The diagonal entries of the matrix S^T L̂ S are the Laplacian quadratic forms for each dimension of the shape space, and the functional form of the second term in Eq. (7) is given by their sum:

E_T(S, L) = tr(S^T L̂ S).    (10)

Minimizing w.r.t. S (i.e. fixing L) attracts the estimates of neighboring elements to be near S_i. Conversely, minimizing w.r.t. L fosters the selection of nearby nodes to form a compact neighborhood, as defined by the weighted sum of the magnitudes of the difference vectors S_i − S_j.
Enforcing Observation Ray Constraints. We penalize the distance of a 3D point estimate X̂_j^i to its known viewing ray using ||(I − γ_j^i γ_j^{i T})(X̂_j^i − C^i)||_2, where γ_j^i is a unit vector parallel to the viewing ray and C^i denotes the camera position, with pose parameters given as in [43]. The functional form of the reprojection term E_R in Eq. (7) is

E_R(S | Θ) = Σ_{i,j} || (I − γ_j^i γ_j^{i T})(X̂_j^i − C^i) ||_2^2,    (11)

which is quadratic in X̂_j^i. The value of its weight depends on the 2D noise level and the mean camera-to-scene distance.
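The ray-distance penalty inside Eq. (11) is a plain point-to-line distance; a small numpy sketch, where the camera center and ray direction are illustrative values:

```python
import numpy as np

def ray_distance(X, C, gamma):
    """Distance from 3D point X to the viewing ray through camera
    center C with unit direction gamma: the norm of the component of
    (X - C) orthogonal to gamma, i.e. ||(I - gamma gamma^T)(X - C)||."""
    gamma = gamma / np.linalg.norm(gamma)
    r = X - C
    return np.linalg.norm(r - (r @ gamma) * gamma)

C = np.zeros(3)                       # camera center at the origin
gamma = np.array([0.0, 0.0, 1.0])     # ray along the +z axis

# A point on the ray has zero penalty ...
assert np.isclose(ray_distance(np.array([0.0, 0.0, 5.0]), C, gamma), 0.0)
# ... and a point offset (3, 4) laterally is 5 units from the ray.
assert np.isclose(ray_distance(np.array([3.0, 4.0, 7.0]), C, gamma), 5.0)
```

Squaring this distance and summing over all observations yields the quadratic penalty of Eq. (11).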
Enforcing Multi-view Reconstructability.
Viewing geometry plays a determinant role in the overall accuracy of our 3D estimates (see Section 7 for a detailed analysis). Intuitively, for moderate-to-high 2D noise levels, the selection of temporally adjacent cameras with small baselines will amplify 3D estimation error. In order to foster the selection of cameras having favorable convergence angles among viewing rays corresponding to the same feature track, we define the functional form of the ray-convergence term E_C in Eq. (7) as
E_C(W | Θ) = Σ_{i,j} W_ij Σ_p ( γ_p^{i T} γ_p^{j} )^2.    (12)
5 Solving for Unsynchronized Image Streams
Given an image set comprised of the aggregation of multiple image streams, we can ascertain partial sequencing (i.e. within disjoint image subsets). We use this information in two different ways. First, we enforce spatial smoothness among successive observations from a common stream. Second, we integrate the disjoint local sequences into a global sequencing estimate enforced through our optimization.
Enforcing Intra-Sequence Coherence.
We define the affinity matrix as the sum of two components: a variable component, which constitutes the free portion of our estimation, and a fixed component, which encodes small additive affinity values for the immediately preceding and succeeding frames from the same image stream. The collapsing neighborhood prior will then enforce such pseudo-adjacent 3D estimates to be similar.
Manipulating the Spectral Signature of L.
For a given global sequencing prior, in the form of a line embedding v of all our graph nodes, we modify Eq. (10) to be

E_T(S, L) = tr(S^T L̂ S) + v^T L̂ v.    (13)

We now describe how we determine such a line embedding v.
Integrating Global Sequencing Priors.
Our goal is to integrate preliminary (e.g. initialization) geometry estimates with reliable but partial sequencing information (e.g. single-video frame sequencing) into a global sequencing prior. Towards this end, we pose image sequencing from a given 3D structure as a dimensionality reduction instance, where the goal is to find a line mapping which preserves (as much as possible) the pairwise proximity relations among 3D estimates.
While using Euclidean distance as a pairwise proximity measure is suitable for approximately linear motion, non-linear motion manifolds (i.e. repetitive or self-intersecting motions) may collapse temporally distant observations to proximal locations in the line embedding.
Arc Distance through Dynamic Time Warping. We define the approximate 3D trajectory arc distance for shapes within sequenced image streams as the sum of 3D line segment lengths among adjacent observations; see Fig. 2(a). To generalize this notion across image streams, we perform global approximate inter-sequence registration through Dynamic Time Warping (DTW). Our goal is to assign to each 3D estimate along one trajectory the closest line segment in each of the other trajectories, without violating any sequencing constraints in our assignments. DTW computes such monotone assignments through the recurrence

C(a, b) = d(a, b) + min{ C(a−1, b), C(a, b−1), C(a−1, b−1) },    (14)

where d(a, b) denotes the distance between the a-th sample of one trajectory and the b-th segment of the other. Once all assignments are made, the inter-sequence arc length between two samples is computed as the sum of 1) the distance from the first sample to the closest point on its assigned line segment, plus 2) the intra-sequence arc distance from that point to the second sample. Fig. 2(b) illustrates such an arc distance as the length of the green line.
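A generic DTW alignment between two sequenced 3D trajectories can be sketched as follows; this uses point-to-point rather than point-to-segment distances for brevity, and the trajectories are illustrative:

```python
import numpy as np

def dtw_align(T1, T2):
    """Classic dynamic time warping between two sequenced 3D
    trajectories (arrays of shape (n, 3) and (m, 3)). Returns the
    optimal monotone alignment path as a list of (i, j) index pairs."""
    n, m = len(T1), len(T2)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(T1[i - 1] - T2[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack the cheapest monotone assignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Align a trajectory with a sparser sampling of the same curve.
t = np.linspace(0.0, 1.0, 8)
T1 = np.stack([t, t ** 2, np.zeros_like(t)], axis=1)
T2 = T1[::2]
path = dtw_align(T1, T2)
assert path[0] == (0, 0) and path[-1] == (len(T1) - 1, len(T2) - 1)
```

The monotonicity of the recovered path is what prevents assignments from violating the per-stream sequencing constraints.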
Dimensionality Reduction Methods. We use arc length to define a pairwise distance matrix, from which we attain a vector embedding v through Spectral Ranking (SR) [15, 13] or Multidimensional Scaling (MDS) [1]. Sequencing is attained by sorting v. Alternatively, we interpret the distance matrix as a complete graph's weight matrix and find the approximate shortest Hamiltonian path (SHP). Table 1 compares these methods operating on the arc-distance matrix and on the Euclidean distance matrix, with both matrices computed from either the initial 3D structure or the estimated 3D structure after our optimization.
Method   Linear motion      Nonlinear motion   Repeating motion
SR       0.9956   0.9996    0.9807   0.9991    0.6754   0.7140
         0.9965   1         0.9570   1         0.9711   0.9934
MDS      0.9943   1         0.7614   0.7044    0.6421   0.6553
         0.9961   1         0.8741   1         0.9316   0.9732
SHP      1        1         0.4368   0.9996    0.3329   0.7912
         1        1         0.5325   0.9996    0.3947   0.7934
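The SR route can be illustrated with a toy chain: for a path-structured affinity matrix, sorting the entries of the Laplacian's Fiedler vector recovers the hidden ordering up to a global flip. A small numpy sketch, where the shuffled 5-node chain is an illustrative stand-in for an image set:

```python
import numpy as np

# A 5-node chain whose node labels have been shuffled:
# the hidden chain order (by node index) is 1 - 3 - 0 - 2 - 4.
chain = np.array([1, 3, 0, 2, 4])
N = len(chain)
A = np.zeros((N, N))
for k in range(N - 1):
    i, j = chain[k], chain[k + 1]
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# The Fiedler vector (eigenvector of the second-smallest eigenvalue)
# of a path graph is monotone along the path, so sorting its entries
# recovers the sequencing up to a global flip.
vals, vecs = np.linalg.eigh(L)
recovered = np.argsort(vecs[:, 1])
assert (recovered == chain).all() or (recovered == chain[::-1]).all()
```

The same idea underlies spectral seriation methods such as SerialRank [15].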
6 Optimization
Eq. (7) is a tri-convex function over the variable blocks S, W, and D. We use the Alternate Convex Search (ACS) [17] strategy, alternately optimizing over each variable block while fixing the other two. For the first iteration, we initialize S and D (as described below); we then alternately optimize over each variable block in the order W, D, S until (thresholded) convergence of our cost function among successive iterations.
Optimizing over S. While the variable blocks W and D are fixed, cost function (7) is a quadratic function of the block S without any constraints. The solution of this quadratic program is the set of variable values found at the zeros of the derivative of the cost function.
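Concretely, an unconstrained convex quadratic is minimized by solving the linear system obtained from its vanishing gradient; a generic numpy sketch, where the Hessian and linear term are random stand-ins for the actual cost coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)        # symmetric positive-definite Hessian
b = rng.standard_normal(n)

# The unconstrained convex quadratic f(x) = 0.5 x^T Q x - b^T x is
# minimized where its gradient Q x - b vanishes, i.e. by a linear solve.
x_star = np.linalg.solve(Q, b)

f = lambda x: 0.5 * x @ Q @ x - b @ x
assert np.allclose(Q @ x_star - b, 0.0)
# Any perturbation can only increase the cost.
assert f(x_star) <= f(x_star + 0.01 * rng.standard_normal(n))
```

In practice a sparse solver would exploit the sparsity of L at this stage.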
Optimizing over W. With S and D fixed, minimizing the smoothness, locality/convergence, and reprojection terms respectively yields quadratic, linear, and constant contributions in W, making the cost function a quadratic program for W:

min_W  || L S ||_F^2 + λ_1 tr(S^T L̂ S) + λ_2 E_C(W | Θ)    (15)
s.t.   Σ_j W_ij = 1,   W_ij ≥ 0.

Each row of W is independent and is solved as a quadratic program with linear constraints. We optimize each row in parallel via the Active-Set method of [11].
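Each row sub-problem is a small QP over the probability simplex. In place of the Active-Set solver of [11], a self-contained projected-gradient sketch (the solver choice and toy coefficients are my own, for illustration only):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {w : w >= 0, sum(w) = 1}, via the standard sorting method."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def solve_row(Q, b, iters=500, lr=0.1):
    """Projected gradient descent for min_w 0.5 w^T Q w - b^T w
    subject to w lying on the simplex (one row of W at a time)."""
    w = np.full(len(b), 1.0 / len(b))
    for _ in range(iters):
        w = project_simplex(w - lr * (Q @ w - b))
    return w

# With Q = I, the constrained minimizer is exactly the projection of b.
Q = np.eye(3)
b = np.array([0.2, 0.9, 0.1])
w = solve_row(Q, b)
assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
assert np.allclose(w, project_simplex(b), atol=1e-4)
```

Since the rows are independent, such solves parallelize trivially, matching the per-row parallelism described above.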
Optimizing over D. With S and W fixed, optimizing Eq. (7) yields a quadratic function of the diagonal values of D. We optimize the same objective as in Eq. (15), but with linear constraints normalizing the sum of out-degree values to one.
Optimizing for the spectral sequencing prior. When optimizing over W or D, the matrix S is replaced by a vector v computed from the current estimate of S through one of the dimensionality reduction methods described earlier (e.g. MDS applied to the arc-distance matrix). Hence, the second term becomes

v^T L̂ v.    (16)

When using MDS as the dimensionality reduction method, v approximately preserves the pairwise arc distances, allowing direct implementation within Eq. (16). When using SR, v corresponds to the graph's Fiedler vector, whose entry values range from −1 to 1, requiring a uniform scaling in order to match the range of the current structure estimate.
Initialization. We initialize the degree matrix D with uniform diagonal values. We initialize the 3D structure observed in each image by the approximate two-view pseudo-triangulation of each viewing ray with its corresponding viewing ray from the most convergent image, i.e. the image with the minimum aggregated pseudo-triangulation error when considering all commonly observed points.
7 Structure Reconstruction Accuracy
We analyze how the Laplacian linear and quadratic forms influence the accuracy of our estimates of S, assuming: 1) D is fixed; 2) W encodes ground-truth temporal adjacency; and 3) noise-free 2D observations. This equates to optimizing Eq. (7) while omitting the ray-convergence and reprojection terms, yielding

min_S  || L S ||_F^2 + λ_1 tr(S^T L̂ S).    (17)
We denote the ground-truth structure as S* and, since each point is independently estimated, we analyze the case of one point per shape. A point along its viewing ray can then be written as X̂_i = X_i* + e_i γ_i, where the unknown variables e_i denote the signed distance from ground truth along the viewing ray, and |e_i| is the reconstruction error (i.e. depth error). Eq. (17) is an unconstrained quadratic program, solved by setting its derivative over e = (e_1, ..., e_N)^T to zero, yielding a linear system

A e = b,    (18)–(20)

where A is an N×N matrix and b is an N-vector, both determined by the Laplacian coefficients and the viewing-ray directions γ_i; Eq. (20) gives the entries of b. From Eq. (18), we attain lower and upper bounds for the reconstruction error as

|| b ||_2 / σ_max(A)  ≤  || e ||_2  ≤  || b ||_2 / σ_min(A).    (21)
Imaging geometry convergence. We consider two cameras alternating the capture of a motion sequence, placed sufficiently far from the motion center such that the viewing-ray convergence angle θ for all joints can be approximated by the angle subtended by the two cameras at the motion center. We vary θ from 0 to π as in Fig. 3(a), and evaluate the reconstruction error and its upper bound, which, as shown in Fig. 3(b), decrease as the viewing rays approach orthogonality.
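This behavior is easy to reproduce with a two-ray toy experiment; the following numpy sketch uses an illustrative camera layout, noise model, and midpoint triangulation routine, not the paper's setup:

```python
import numpy as np

def midpoint_triangulate(C1, g1, C2, g2):
    """Least-squares intersection of two viewing rays C + t * g."""
    A = np.stack([g1, -g2], axis=1)   # solve t1*g1 - t2*g2 = C2 - C1
    t, *_ = np.linalg.lstsq(A, C2 - C1, rcond=None)
    return 0.5 * ((C1 + t[0] * g1) + (C2 + t[1] * g2))

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def error_at_angle(theta, noise=1e-3):
    """3D error when one viewing ray is perturbed in-plane by `noise`
    radians, for two cameras whose rays converge at angle `theta`."""
    X = np.zeros(3)                                   # ground-truth point
    C1 = 10.0 * np.array([1.0, 0.0, 0.0])
    C2 = 10.0 * np.array([np.cos(theta), np.sin(theta), 0.0])
    g1 = (X - C1) / np.linalg.norm(X - C1)
    g2 = rot_z(noise) @ ((X - C2) / np.linalg.norm(X - C2))
    return np.linalg.norm(midpoint_triangulate(C1, g1, C2, g2) - X)

# Identical ray noise is amplified far more by a narrow (5 degree)
# convergence angle than by near-orthogonal viewing rays.
assert error_at_angle(np.pi / 36) > 5 * error_at_angle(np.pi / 2)
```

The amplification roughly follows the classic 1/sin(θ) depth-error behavior of narrow-baseline triangulation.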
3D motion observability. The vector b in Eq. (20) lies on a local motion plane formed by a point and its two neighboring points. Similarly, each row of the linear system is built from vectors lying on a local motion plane. For smooth motion under dense sampling, a triplet of successive local motion planes can be approximated by a common 3D plane Π. Hence, the vector b will be contained in Π, yielding smaller values of ||b|| as the viewing rays near orthogonality to Π. In Fig. 3(c), we consider a circular motion observed by two cameras with constant convergence angle, pointing at the motion center. In this configuration, the system coefficients are nearly constant. We vary the angle between the viewing directions and the motion plane Π. Fig. 3(d) shows more accurate reconstruction is attained for viewing directions near orthogonal to the motion plane.
8 Experiments





8.1 Motion Capture Datasets
We synthesize 2D features from human 3D motion capture sequences of 31 joints recorded at 120 Hz [22]. We choose 10 sample motions, each having on average 300 frames. We use the 3D joint positions as ground-truth dynamic structure and project them to each frame of four virtual cameras as 2D observations. All cameras are static, have a focal length of 1000 pixels, and are placed at a distance of 3 meters around the motion center. The four cameras are unsynchronized, with frame rates up to 30 Hz. Accuracy is quantified by mean 3D reconstruction error. Our method, Discrete Laplace Operator Estimation (DLOE), is compared against self-expressive dictionary learning (SEDL) [43], trajectory basis (TB) [26], high-pass filter (HPF) [35], and the pseudo-triangulation approach of Sec. 6. SEDL requires partial sequencing information; TB and HPF require complete ground-truth sequencing. We also include a version of our method leveraging ground-truth sequencing by enforcing structural constraints on W similarly to HPF.
Varying 2D noise. We add white noise to the 2D observations with standard deviations from 1 to 5 pixels. The two cost-weighting parameters are fixed at 0.0015 and 0.02. Per Fig. 4(a), reconstruction accuracy degrades as the 2D observation error increases. Our method is competitive with frameworks requiring sequencing information, such as TB and HPF.

Varying frame rates. We temporally downsample the motion capture datasets and perform experiments at frame rates of 30 Hz, 15 Hz, and 7.5 Hz, without 2D observation noise. As shown in Fig. 4(b), without sequencing information, our method outperforms SEDL for lower frame rates. Results for methods using full sequencing information are comparable.
Missing data. We randomly decimate 10% to 50% of the total 3D points before projection onto the virtual cameras. Reconstruction error comparisons are restricted to SEDL and TB, as the other methods do not recover missing joints. Per Fig. 4(c), our method has lower reconstruction error across all missing-data levels, compared to SEDL with partial sequencing information and TB with full sequencing information.
Nonuniform density. We randomly drop 10% to 50% of total frames from the motion sequence. The reconstruction error increases disproportionately for the other methods compared to ours, as depicted in Fig. 4(d).
Execution run times.
Average run times for our Matlab implementation on an Intel i7-8700K CPU for optimizing each of our three variable blocks are plotted in Fig. 7(a), reconstructing features over a variable number of frames N. The time complexity for optimizing over W using an Active-Set method [11] depends on the number of non-zero values in the active set; however, the number of estimation variables per row for this stage is only N. Optimizing D has comparable cost, since we use the same solver for each row. Optimizing over S is an unconstrained convex quadratic program, equating to solving a linear system of equations. Average running times for minimizing over either W or D are smaller due to the sparsity of L. The total number of iterations depends on initialization quality; the reported experiments ran an average of 62.26 iterations.
Ablation Analysis.
We analyze the contribution of the different terms in Eq. (7) toward reconstruction accuracy for scenarios of moderate-to-high 2D noise levels. Fig. 7(b) shows results for multiple variants. The observation ray term is common to all variants. Best performance is achieved by the instance optimizing over all geometric terms.
8.2 Multi-view Video and Image Datasets
Experiments on imagery with known camera geometry include the Juggler [7], Climb [25] and Ski [27] datasets. We unsynchronized the input by removing concurrent observations, randomly selecting a single camera whenever multiple images shared a common timestamp. Timestamps were only used for eliminating concurrency. For Juggler, we use as 2D features the joint positions detected by [40]. For Climb and Ski, we used the provided 2D feature tracks and 2D joint detection locations, respectively. Fig. 6 illustrates our results and describes the experimental setup.
8.3 Application to Event Segmentation
We consider the case of dynamic reconstruction of spatially co-located, but temporally disjoint, events captured in a single aggregated image set. For such a scenario, we obtain a Laplacian matrix describing a graph with multiple connected components, one per event. Importantly, for each component we sequence its images and reconstruct its dynamic 3D geometry. Spectral analysis of the Laplacian matrix visualizes the chain-like topology of each of these events/clusters; see Fig. 6(a), top right.
8.4 Application to Multi-Target Scenarios
Given multiple subjects observed in the input images, our aggregated shape representation requires solving the data association of input 2D features among subjects across images [37]. To this end, we leverage DLOE's event segmentation capabilities (Section 8.3) as follows: 1) for each input image, we create a proxy image for each subject observed therein; 2) we execute DLOE on the aggregated set of proxy images (each observing a single subject's 3D points) to reconstruct each subject's motion as a distinct event; 3) we associate the 3D estimates of proxy images based on their common ancestor image, providing a coalesced spatiotemporal context for each reconstructed event; 4) we aggregate the 2D features of all sibling proxy images into a single 2D shape representation, enforcing the data associations from each event; and 5) we run DLOE on the aggregated representation over the original input images, to improve the decoupled event reconstructions from step 2. Fig. 6(b) shows our workflow results for a two-target scenario.
9 Conclusion
We presented a data-adaptive framework for the modeling of spatiotemporal relationships among visual data. Our tri-convex optimization framework outperforms state-of-the-art methods in the challenging scenarios of decreasing and irregular temporal sampling. The generality of the formulation and its internal data representations suggests robust dynamic 3D reconstruction as a data association framework for video.
References
 [1] Hervé Abdi. Metric multidimensional scaling (mds): analyzing distance matrices. Encyclopedia of measurement and statistics. Sage, Thousand Oaks, CA, pages 1–13, 2007.
 [2] Antonio Agudo and Francesc Moreno-Noguer. Deformable motion 3d reconstruction by union of regularized subspaces. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 2930–2934. IEEE, 2018.
 [3] Antonio Agudo and Francesc Moreno-Noguer. A scalable, efficient, and accurate solution to non-rigid structure from motion. Computer Vision and Image Understanding, 167:121–133, 2018.
 [4] Antonio Agudo and Francesc Moreno-Noguer. Robust spatio-temporal clustering and reconstruction of multiple deformable bodies. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(4):971–984, 2019.

 [5] Shai Avidan and Amnon Shashua. Trajectory triangulation of lines: Reconstruction of a 3d point moving along a line from a monocular image sequence. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 62–66. IEEE, 1999.
 [6] Shai Avidan and Amnon Shashua. Trajectory triangulation: 3d reconstruction of moving points from a monocular image sequence. IEEE Transactions on Pattern Analysis and Machine Intelligence, (4):348–357, 2000.
 [7] Luca Ballan, Gabriel J Brostow, Jens Puwein, and Marc Pollefeys. Unstructured video-based rendering: Interactive exploration of casually captured videos. ACM Transactions on Graphics, 29(4), 2010.
 [8] Tali Basha, Yael Moses, and Shai Avidan. Photo sequencing. In European Conference on Computer Vision, pages 654–667. Springer, 2012.
 [9] Yaron Caspi, Denis Simakov, and Michal Irani. Feature-based sequence-to-sequence matching. International Journal of Computer Vision, 68(1):53–64, 2006.
 [10] Mo Chen, Qiong Yang, and Xiaoou Tang. Directed graph embedding. In IJCAI, pages 2707–2712, 2007.
 [11] Yuansi Chen, Julien Mairal, and Zaid Harchaoui. Fast and robust archetypal analysis for representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1478–1485, 2014.
 [12] Fan Chung. Laplacians and the cheeger inequality for directed graphs. Annals of Combinatorics, 9(1):1–19, 2005.
 [13] Fan RK Chung. Spectral graph theory. Number 92. American Mathematical Soc., 1997.
 [14] Ahmed Elhayek, Carsten Stoll, Kwang In Kim, HP Seidel, and Christian Theobalt. Featurebased multivideo synchronization with subframe accuracy. In Joint DAGM (German Association for Pattern Recognition) and OAGM Symposium, pages 266–275. Springer, 2012.
 [15] Fajwel Fogel, Alexandre d’Aspremont, and Milan Vojnovic. Serialrank: Spectral ranking using seriation. In Advances in Neural Information Processing Systems, pages 900–908, 2014.
 [16] Tiago Gaspar, Paulo Oliveira, and Paolo Favaro. Synchronization of two independently moving cameras without feature correspondences. In European Conference on Computer Vision, pages 189–204. Springer, 2014.
 [17] Jochen Gorski, Frank Pfeuffer, and Kathrin Klamroth. Biconvex sets and optimization with biconvex functions: a survey and extensions. Mathematical methods of operations research, 66(3):373–407, 2007.
 [18] Mei Han and Takeo Kanade. Reconstruction of a scene with multiple linearly moving objects. International Journal of Computer Vision, 59(3):285–300, 2004.
 [19] Dinghuang Ji, Enrique Dunn, and JanMichael Frahm. Spatiotemporally consistent correspondence for dense dynamic scene modeling. In European Conference on Computer Vision, pages 3–18. Springer, 2016.
 [20] Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Godisart, Bart Nabbe, Iain Matthews, et al. Panoptic studio: A massively multiview system for social interaction capture. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(1):190–204, 2017.
 [21] Yael Moses, Shai Avidan, et al. Spacetime tradeoffs in photo sequencing. In Proceedings of the IEEE International Conference on Computer Vision, pages 977–984, 2013.
 [22] Meinard Müller, Tido Röder, Michael Clausen, Bernhard Eberhardt, Björn Krüger, and Andreas Weber. Documentation mocap database hdm05. 2007.
 [23] Flavio Padua, Rodrigo Carceroni, Geraldo Santos, and Kiriakos Kutulakos. Linear sequencetosequence alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2):304–320, 2010.
 [24] Hyun Soo Park and Yaser Sheikh. 3d reconstruction of a smooth articulated trajectory from a monocular image sequence. In 2011 International Conference on Computer Vision, pages 201–208. IEEE, 2011.
 [25] Hyun Soo Park, Takaaki Shiratori, Iain Matthews, and Yaser Sheikh. 3d reconstruction of a moving point from a series of 2d projections. In European Conference on Computer Vision, pages 158–171. Springer, 2010.
 [26] Hyun Soo Park, Takaaki Shiratori, Iain Matthews, and Yaser Sheikh. 3d trajectory reconstruction under perspective projection. International Journal of Computer Vision, 115(2):115–135, 2015.
 [27] Helge Rhodin, Jörg Spörri, Isinsu Katircioglu, Victor Constantin, Frédéric Meyer, Erich Müller, Mathieu Salzmann, and Pascal Fua. Learning monocular 3d human pose estimation from multiview images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8437–8446, 2018.
 [28] Dana Segal and Amnon Shashua. 3d reconstruction from tangentofsight measurements of a moving object seen from a moving camera. In European Conference on Computer Vision, pages 621–631. Springer, 2000.
 [29] Amnon Shashua, Shai Avidan, and Michael Werman. Trajectory triangulation over conic section. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 1, pages 330–336. IEEE, 1999.
 [30] Tomas Simon, Jack Valmadre, Iain Matthews, and Yaser Sheikh. Separable spatiotemporal priors for convex reconstruction of timevarying 3d point clouds. In European Conference on Computer Vision, pages 204–219. Springer, 2014.
 [31] Tomas Simon, Jack Valmadre, Iain Matthews, and Yaser Sheikh. Kroneckermarkov prior for dynamic 3d reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(11):2201–2214, 2017.
 [32] Olga Sorkine. Laplacian mesh processing. In Eurographics (STARs), pages 53–70, 2005.
 [33] Philip A Tresadern and Ian D Reid. Video synchronization from human motion using rank constraints. Computer Vision and Image Understanding, 113(8):891–906, 2009.
 [34] Tinne Tuytelaars and Luc Van Gool. Synchronizing video sequences. In Proceedings of the 2004 IEEE Conference on Computer Vision and Pattern Recognition, VOL 1, volume 1, pages 762–768. Institute of Electrical and Electronics Engineers, 2004.
 [35] Jack Valmadre and Simon Lucey. General trajectory prior for nonrigid reconstruction. In Proceedings of 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 1394–1401. IEEE, 2012.
 [36] Minh Vo, Srinivasa G Narasimhan, and Yaser Sheikh. Spatiotemporal bundle adjustment for dynamic 3d reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1710–1718, 2016.
 [37] Minh Vo, Ersin Yumer, Kalyan Sunkavalli, Sunil Hadap, Yaser Sheikh, and Srinivasa Narasimhan. Automatic adaptation of person association for multiview tracking in group activities. arXiv preprint arXiv:1805.08717, 2018.
 [38] Xinchao Wang, Wei Bian, and Dacheng Tao. Grassmannian regularized structured multiview embedding for image classification. IEEE Trans. Image Processing, 22(7):2646–2660, 2013.
 [39] Daniel Wedge, Du Huynh, and Peter Kovesi. Motion guided video sequence synchronization. In Asian Conference on Computer Vision, pages 832–841. Springer, 2006.
 [40] ShihEn Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4724–4732, 2016.
 [41] Jin Zeng, Jiahao Pang, Wenxiu Sun, Gene Cheung, and Ruichao Xiao. Deep graph laplacian regularization. arXiv preprint arXiv:1807.11637, 2018.
 [42] Enliang Zheng, Dinghuang Ji, Enrique Dunn, and JanMichael Frahm. Sparse dynamic 3d reconstruction from unsynchronized videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 4435–4443, 2015.
 [43] Enliang Zheng, Dinghuang Ji, Enrique Dunn, and JanMichael Frahm. Selfexpressive dictionary learning for dynamic 3d reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(9):2223–2237, 2017.
 [44] Quan Zheng and David B Skillicorn. Spectral embedding of directed networks. Social Network Analysis and Mining, 6(1):76, 2016.
 [45] Yingying Zhu, M Cox, and S Lucey. 3d motion reconstruction for realworld camera motion. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. IEEE Computer Society, 2011.
 [46] Yingying Zhu and Simon Lucey. Convolutional sparse coding for trajectory reconstruction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3):529–540, 2015.