Accurate state estimation of building components across large workspaces is crucial for automated construction assembly, yet construction environments are inherently uncertain due to material variability, structural deflection, and changing site conditions. This paper addresses the resulting tension between precision and adaptability by presenting a dual-camera and point-cloud fusion framework for robotic assembly of timber panels. A tripod-mounted global RGB-D camera provides continuous workspace coverage, while an eye-in-hand camera supplies high-precision local measurements when components enter its field of view. AprilTag fiducial markers on the panels and the robot base support continuous self-calibration of the global camera relative to the robot, and statistical fusion combines pose estimates through distance-based confidence weighting, inverse-variance weighting, and temporal outlier rejection. These fused poses are optionally refined using point cloud registration via a coarse-to-fine Iterative Closest Point (ICP) scheme. The system is embedded in a ROS-based architecture that links real-time sensing to a parametric design environment and an impedance-controlled KUKA iiwa manipulator. Experiments on the insertion of interlocking timber panels show that dual-camera fusion substantially improves consistency over global-only sensing and enables successful assembly where manual calibration or single-sensor feedback fails. The results demonstrate how balancing precision sensing with tolerance to uncertainty can support robust, adaptive robotic construction workflows.
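The inverse-variance weighting mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the treatment of position only (orientation fusion would need quaternion averaging), and the example variances are assumptions made for illustration.

```python
import numpy as np

def fuse_inverse_variance(positions, variances):
    """Fuse position estimates from multiple cameras by inverse-variance weighting.

    positions: sequence of (3,) position estimates, one per sensor
    variances: sequence of scalar measurement variances, one per sensor
    Returns the fused position and the variance of the fused estimate.
    """
    positions = np.asarray(positions, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)  # more certain sensors weigh more
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()  # fused estimate is tighter than either input
    return fused, fused_var

# Illustrative example: a noisier global-camera estimate and a more precise
# eye-in-hand estimate of the same panel (variances in m^2 are hypothetical).
global_pos = [1.00, 2.00, 0.50]   # variance 4e-4 (std ~2 cm)
local_pos  = [1.02, 2.01, 0.51]   # variance 1e-4 (std ~1 cm)
fused, var = fuse_inverse_variance([global_pos, local_pos], [4e-4, 1e-4])
# fused -> [1.016, 2.008, 0.508]; the result lies closer to the precise local estimate
```

The fused variance (here 8e-5) is smaller than either input variance, which is the statistical motivation for combining the two cameras rather than trusting the local sensor alone.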