In a computer system with a hierarchical memory structure, storage is the main performance bottleneck; an efficient data management policy can therefore improve overall system performance. Intrinsically, data are categorized as hot or cold according to their access frequency, and various studies have sought to manage hot data efficiently for better performance. However, previous approaches incur runtime computational and memory overhead, and their faulty classification degrades performance. In this paper, we propose a novel data placement scheme for storage systems that considers file system I/O characteristics. From the file system's perspective, metadata and journal data are hot, as they are accessed more frequently than user data, and the semantic structures of a file system lead to unique disk I/O patterns. Using this semantic information, the proposed scheme identifies hot data, including metadata and journal data. Hot data are then placed in the same block in SSDs to reduce garbage collection overhead, and in the middle tracks of hard disk drives to reduce seek time. Because the hot data classification is inferred from the immovable file system structure, the proposed scheme is free from faulty classification. We implemented the proposed scheme in FEMU, a QEMU-based SSD emulator, and evaluated it with three benchmarks: Postmark, YCSB, and Filebench. The results show that our scheme improves storage system performance by up to 173.81% (47.62% on average) on EXT4 and by up to 19.79% (7.27% on average) on XFS. The results also show that the scheme reduces the write amplification factor, extending SSD lifetime: erase operations decreased by up to 22.31% (10.36% on average) on EXT4 and by up to 4.58% (2.5% on average) on XFS, and the number of valid data copies from victim blocks decreased by up to 73.07% (29.62% on average) on EXT4 and by up to 49% (13.45% on average) on XFS. The contributions of this paper are as follows. First, we demonstrate that each file system has a unique I/O pattern and that metadata and journal data are accessed more frequently than user data. Second, we propose a data placement policy that exploits the semantic information of a file system to improve the performance and lifetime of SSDs. Third, we show that the policy incurs negligible runtime overhead and can therefore be executed inside the SSD without host-side support.
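
To make the placement idea concrete, the following is a minimal C sketch of one possible form of the semantic hot-data classifier and block router described above: a write is treated as hot if its logical block address (LBA) falls inside the file system's metadata or journal regions, whose locations are fixed by the on-disk layout, and hot writes are directed to a dedicated flash block so that hot and cold pages are never mixed. The region boundaries and all names here are hypothetical and for illustration only; they are not the paper's actual implementation, in which such regions would be derived from the file system's on-disk structures.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hot regions of an EXT4-like layout, expressed as LBA
 * ranges. In the real scheme these would come from the file system's
 * fixed semantic structures (superblock, inode tables, journal),
 * not from hard-coded constants. */
typedef struct {
    uint64_t start;
    uint64_t end; /* exclusive */
} region_t;

static const region_t hot_regions[] = {
    {      0,   2048 }, /* superblock + group descriptors (assumed) */
    {   4096,   8192 }, /* inode tables (assumed)                   */
    { 262144, 294912 }, /* journal (assumed)                        */
};

/* A write is hot if it targets a metadata or journal region. The
 * classification is static because the file system layout is
 * immovable, so it cannot drift into faulty classification. */
static bool is_hot(uint64_t lba)
{
    for (size_t i = 0; i < sizeof(hot_regions) / sizeof(hot_regions[0]); i++)
        if (lba >= hot_regions[i].start && lba < hot_regions[i].end)
            return true;
    return false;
}

/* Route each write to one of two open flash blocks so hot and cold
 * pages never share a block; victim blocks then hold mostly invalid
 * pages, cutting valid-page copies during garbage collection. */
static void place_write(uint64_t lba)
{
    int block = is_hot(lba) ? 0 /* hot block */ : 1 /* cold block */;
    printf("LBA %8llu -> %s block %d\n",
           (unsigned long long)lba, is_hot(lba) ? "hot " : "cold", block);
}

int main(void)
{
    uint64_t trace[] = { 100, 5000, 270000, 1000000 };
    for (size_t i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        place_write(trace[i]);
    return 0;
}
```

Because the lookup is a handful of range comparisons per write and the region table is fixed at mount time, a classifier of this shape adds negligible runtime cost, which is consistent with the claim that the policy can run inside the SSD without host-side support.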