Techniques for estimating depth maps from monocular images have long been studied. Depth maps capture the geometric relationships within a scene and support applications such as object detection, 3D modeling, and augmented reality; they can also help reason about occlusions between objects. Depth estimation is thus a core problem in computer vision and essential for numerous applications. Deep-learning research has steadily advanced methods that exploit image-level information and hierarchical features, but these methods have limitations in measuring depth and detecting forward objects at night and in shadowed environments. In this paper, we propose a new method to overcome these limitations. The proposed method uses a Vision Transformer (ViT) to automatically attend to objects in the image and predicts depth maps through three new modules: first, a Reconstitution module reconstructs the image representation; second, a Fusion module fuses and upsamples the reconstructed representation for more detailed prediction, reducing the loss incurred while generating the depth map; third, a patch-wise fine-tuning module, which experiments confirmed produces a cleaner and more accurate depth map. The method can be used in various environments and achieves excellent results in both quantitative and qualitative evaluation.
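The pipeline summarized above (ViT-style patch tokens, reconstitution of tokens into a spatial feature map, then fusion with upsampling to a per-pixel depth map) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the token features, the mean-pooling in `reconstitute`, and the nearest-neighbour upsampling standing in for the learned Fusion module are all assumptions.

```python
import numpy as np

def patchify(image, patch=4):
    # Split an HxW image into non-overlapping patch tokens (ViT-style).
    h, w = image.shape
    tokens = (image.reshape(h // patch, patch, w // patch, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, patch * patch))
    return tokens  # (num_tokens, patch*patch)

def reconstitute(tokens, grid_h, grid_w):
    # "Reconstitution" stand-in: rearrange token features back into a
    # 2-D feature map; mean over the feature dimension is illustrative only.
    return tokens.reshape(grid_h, grid_w, -1).mean(axis=-1)  # (grid_h, grid_w)

def fuse_upsample(feat, scale=4):
    # "Fusion" stand-in: nearest-neighbour upsampling of the coarse map
    # back to input resolution (in place of learned fusion + upsampling).
    return np.kron(feat, np.ones((scale, scale)))

def depth_sketch(image, patch=4):
    # End-to-end sketch: tokens -> coarse feature map -> full-resolution map.
    tokens = patchify(image, patch)
    feat = reconstitute(tokens, image.shape[0] // patch, image.shape[1] // patch)
    return fuse_upsample(feat, patch)

img = np.random.rand(32, 32)
depth = depth_sketch(img)
print(depth.shape)  # (32, 32): same spatial size as the input
```

In a real model each stage would be learned (transformer encoder, convolutional reconstitution, and a fusion decoder); the sketch only shows how the spatial resolution is recovered from patch tokens.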