Database searching is one of the most common operations in computer science. However, as data migrates to the cloud, the sheer volume of data consumes substantial computing resources, which raises the question of how to accelerate big-data processing. General-purpose computing on graphics processing units (GPGPU) offers high computational throughput thanks to its parallel architecture. We therefore distribute the records of a large database across many processing units, compare them in parallel using the high-speed computing structure of the GPU, and then collect the results, improving search efficiency. The Least Recently Used (LRU) algorithm is widely applied in cache technology: it speeds up data retrieval by maintaining a cache in front of the target database and prioritizing entries so that frequently accessed data can be delivered to users immediately, though it consumes a comparatively large amount of system resources. In this thesis, we propose a parallel technique that exploits the fast, multi-core parallel computing capability of the GPU architecture to replace the traditional one-by-one comparison search. We implement the parallelized search of a large database using NVIDIA's Compute Unified Device Architecture (CUDA). The method first builds a lookup table with the LRU algorithm to accelerate the search. We then simulate searching a realistic large database with cache technology on the GPU, without considering the priority of access time. The experimental results show that, once the amount of data and the cache size reach a certain level, the search time improves greatly compared with the non-GPU algorithm.
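The LRU caching idea mentioned above can be illustrated with a minimal sketch. This is not the thesis's implementation; the class name `LRUCache` and the capacity parameter are our own illustrative choices, and a real deployment would sit in front of the target database rather than a plain dictionary.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.store:
            return None  # cache miss: the caller falls back to the database
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes most recently used
cache.put("c", 3)      # evicts "b", the least recently used key
print(cache.get("b"))  # → None (evicted)
print(cache.get("a"))  # → 1
```

The cache returns hot entries immediately and falls back to the database on a miss, which is exactly the bookkeeping overhead the abstract notes as the cost of LRU.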
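The partition-and-compare idea behind the GPU search can be sketched as a CPU analogue. The actual method runs as CUDA kernels with far more threads; here, a thread pool stands in for the GPU's processing units, and the names `search_chunk` and `parallel_search` are our own illustrative choices.

```python
from concurrent.futures import ThreadPoolExecutor

def search_chunk(args):
    """One 'processing unit': compare every record in its chunk against the target."""
    chunk, offset, target = args
    return [offset + i for i, rec in enumerate(chunk) if rec == target]

def parallel_search(records, target, units=4):
    """Split the records across processing units, compare in parallel, merge hits."""
    size = (len(records) + units - 1) // units  # chunk size, rounded up
    chunks = [(records[i:i + size], i, target)
              for i in range(0, len(records), size)]
    hits = []
    with ThreadPoolExecutor(max_workers=units) as pool:
        for partial in pool.map(search_chunk, chunks):
            hits.extend(partial)
    return sorted(hits)

print(parallel_search([5, 7, 1, 7, 3, 7], 7, units=2))  # → [1, 3, 5]
```

Each chunk is scanned independently, so the comparisons proceed concurrently and the results are merged at the end, mirroring how the GPU replaces the traditional one-by-one comparison search.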