Profile

I am a Ph.D. student in the Computer Architecture and Systems Lab (CASL) at Yonsei University, supervised by Prof. William J. Song. I major in Electrical and Electronic Engineering. My research interests include, but are not limited to, GPU microarchitecture for deep neural networks and graph problems, high-performance memory systems, and reliability management.

Experience

  • Research Assistant at Computer Architecture and Systems Lab., School of Electrical and Electronic Engineering, Yonsei University (2018.03 - Present)
  • Undergraduate Intern at Computer Architecture and Systems Lab., School of Electrical and Electronic Engineering, Yonsei University (2017.07 - 2018.02)

Education

  • Ph.D. in Electrical and Electronic Engineering, Yonsei University (2018.03 - Present)
  • B.S. in Electrical and Electronic Engineering, Yonsei University (2012.03 - 2018.02)

Publications

[6] T. Lim, H. Kim, J. Park, B. Kim, and W. J. Song, "RoTA: Rotational Torus Accelerator for Wear Leveling of Neural Processing Elements," Design, Automation and Test in Europe Conference (DATE), Mar. 2025.

[5] T. Lim, H. Kim, J. Park, B. Kim, and W. J. Song, "Wear Leveling of Processing Element Array in Deep Neural Network Accelerators," ACM/IEEE Design Automation Conference (DAC) Work-in-Progress Poster, July 2023.

[4] H. Kim, and W. J. Song, "LAS: Locality-Aware Scheduling for GEMM-Accelerated Convolutions in GPUs," IEEE Transactions on Parallel and Distributed Systems (TPDS), vol. 34, no. 5, pp.1479-1494, May 2023.

[3] Y. Kim, H. Kim, and W. J. Song, "NOMAD: Enabling Non-blocking OS-managed Cache via Tag-Data Decoupling," IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp.193-205, Feb. 2023.

[2] B. Kim, S. Lee, C. Park, H. Kim, and W. J. Song, "The Nebula Benchmark Suite: Implications of Lightweight Neural Networks," IEEE Transactions on Computers (TC), vol. 70, no. 11, pp.1887-1900, Nov. 2021.

[1] H. Kim, S. Ahn, Y. Oh, B. Kim, W. W. Ro, and W. J. Song, "Duplo: Lifting Redundant Memory Accesses of Deep Neural Networks for GPU Tensor Cores," IEEE/ACM International Symposium on Microarchitecture (MICRO), pp.365-379, Oct. 2020.

Patents

[3] "DRAM Cache System and Operating Method of the Same", US Patent Application No. 18/627,459

[2] "Neural Network Accelerator and Method of Controlling Same", US Patent Application No. 18/435,422

[1] "Operation Device of Convolutional Neural Network, Operation Method of Convolutional Neural Network and Computer Program Stored in a Recording Medium to Execute the Method Thereof," US Patent Application No. 17/752,235

Teaching

    Teaching Assistant @ Yonsei University
  • EEE2020: Data Structure and Algorithm, Spring 2018
  • EEE3530: Operating Systems, Fall 2022, Fall 2023
  • EEE3535: Computer Architecture, Spring 2023
  • EEE4610: Senior Projects, Spring 2018, Spring 2022, Fall 2023
  • EEE5601: Advanced Computer Architecture, Spring 2022

Recognition

  • National Science and Technology Scholarship
    Mar. 2012, Korea Student Aid Foundation (KOSAF)
  • Highest Honors Student Award
    Mar. 2017, Yonsei University
  • Graduate School Admission Scholarship
    Mar. 2018, Yonsei University

Technical Skills

  • Programming Languages: C/C++, CUDA, Python
  • Simulation Tools: Accel-sim, GPGPU-sim, gem5, DRAMSim
  • Frameworks: PyTorch