
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea ({mashimaro13, doublej1412, cylee}@yonsei.ac.kr)



Keywords: Beam tracking, Long short-term memory (LSTM), Mobility embedding, Unmanned aerial vehicle (UAV)

1. Introduction

Unmanned aerial vehicles (UAVs) are among the most commonly investigated technologies in the recent literature [1-4]. UAVs are envisioned as a key enabler of seamless connectivity in beyond fifth-generation (B5G) communication systems and non-terrestrial networks (NTNs) [4]. It is anticipated that UAVs can serve various roles in wireless communication systems, such as wireless power transfer [3], aerial relays [5], aerial user equipment [6], and aerial base stations [7]. The maneuverability of UAVs underpins these possibilities, but it is also a major challenge that must be overcome. A wireless channel between a ground base station (GBS) and a UAV is expected to exhibit characteristics different from those of a terrestrial network, including a dominant, strong line-of-sight (LOS) path and high variability.

Modern communication systems pursue higher data rates over millimeter-wave (mmWave) and higher frequency bands. Accordingly, beamforming is essential to cope with the high attenuation of radio frequency (RF) signals and to increase link efficiency. In mmWave communication systems, however, the narrow beams generated by massive antenna arrays suffer sharp drops in beamforming gain even under small misalignment.

Recently, beam tracking has attracted extensive research interest. Some studies [8-10] consider beam tracking in terrestrial networks. However, the channel models in UAV communications must be considered separately, since they differ from terrestrial channels in their high LOS probability and variability. Other studies [1, 11-13] consider beam tracking techniques in UAV communications.

The beam tracking scheme in [11] is based on an extended Kalman filter (EKF) and can operate only in environments with known mobility statistics. In other studies [12,13], the authors proposed beam tracking schemes based on beam training sessions and codebooks, which require additional resources and overhead. A pilot-based 3D beam tracking scheme was proposed for UAV communications in [1]; however, its performance can be degraded by pilot overhead, especially when beam tracking is needed at both the transmitter and the receiver. There have also been efforts to employ deep learning in modern communication systems to capture time-varying environmental features.

A deep learning-aided channel prediction scheme was proposed in [14] for users with mobility. The channel state information (CSI) prediction schemes in [15,16] estimate future CSI from previously acquired CSI.

In this paper, we propose a pairwise 3D beam tracking scheme for an air-to-ground (A2G) communication system in which both the UAV and the GBS conduct beam tracking to communicate with each other. In the considered scenario, each communication node is equipped with a long short-term memory (LSTM)-based mobility network (MoNet) and determines its future beamforming and combining vectors. MoNet uses the received signal for the beam tracking operation rather than implementing beam training stages. As a result, each communication node can predict the future beamforming vectors with high accuracy at every sample time. The proposed MoNet then recursively uses the predicted beamforming vectors as the next input to obtain the subsequent beamforming vectors. Simulation results show that the proposed pairwise beam tracking scheme not only predicts the future beamforming vectors with high accuracy but also achieves high communication rates due to the increased beamforming gain.

2. System Model

We consider an A2G communication system with a GBS and a cellular-connected UAV, as shown in Fig. 1. The GBS and the UAV are equipped with uniform planar arrays (UPAs) with B = B$_{x}$B$_{y}$ and U = U$_{x}$U$_{y}$ elements, respectively. We assume a Rician-fading channel at sample time t, which can be expressed as

(1)
$ \mathbf{H}_{x,t}=\sqrt{\frac{P_{0}}{D_{t}^{\alpha }}}\left(\sqrt{\frac{K}{K+1}}\overline{\mathbf{H}}_{x,t}+\sqrt{\frac{1}{K+1}}\overset{˜}{\mathbf{H}}_{x,t}\right), $

where D$_{t}$ is the distance between the GBS and the UAV, P$_{0}$ is the channel gain at the reference distance of 1 m, ${\alpha}$ is the path loss exponent, K is the Rician factor, $\overline{\mathbf{H}}_{x,t}$ is the LOS channel, and $\overset{˜}{\mathbf{H}}_{x,t}$ is the non-line-of-sight (NLOS) channel, whose elements follow a zero-mean circularly symmetric complex Gaussian (ZMCSCG) distribution with unit variance. The UAV-to-GBS LOS channel is modeled as:

Fig. 1. Illustration of A2G communication system.
../../Resources/ieie/IEIESPC.2023.12.3.269/fig1.png
(2)
$ \overline{\mathbf{H}}_{bu,t}=\mathbf{a}_{b}\left(\phi _{b,t},\theta _{b,t}\right)^{H}\mathbf{a}_{u}\left(\phi _{u,t},\theta _{u,t}\right), $

while the GBS-to-UAV LOS channel is:

(3)
$ \overline{\mathbf{H}}_{ub,t}=\mathbf{a}_{u}\left(\phi _{u,t},\theta _{u,t}\right)^{H}\mathbf{a}_{b}\left(\phi _{b,t},\theta _{b,t}\right), $

where $\phi _{u,t}$, $\theta _{u,t}$, $\phi _{b,t}$, and $\theta _{b,t}$ are the azimuth and elevation angles of the UAV and the GBS, and $\mathbf{a}_{u}\left(\cdot \right)$ and $\mathbf{a}_{b}\left(\cdot \right)$ are the array response vectors of the UAV and the GBS. The UPA array response vector at each node is given by:

(4)
$ \begin{array}{l} \mathbf{a}_{x}\left(\phi ,\theta \right)=\left[1,\ldots , e^{j\frac{2\pi }{\lambda }d\left\{\left(m_{x}-1\right)\sin \theta \sin \phi +\left(m_{y}-1\right)\cos \theta \right\}},\right.\\ \left.\ldots ,e^{j\frac{2\pi }{\lambda }d\left\{\left(M_{x}-1\right)\sin \theta \sin \phi +\left(M_{y}-1\right)\cos \theta \right\}}\right]^{H}, \end{array} $

where ${\lambda}$ is the wavelength, d is the half-wavelength element spacing, and M$_{x}$ and M$_{y}$ are the numbers of elements along the x- and y-axes. Assuming that the GBS and the UAV are communicating with each other, the received signals at the GBS and the UAV are written as:

(5)
$\begin{align} \mathbf{y}_{b,t}&=\mathbf{H}_{bu,t}\mathbf{f}_{u,t}s_{u,t}+\mathbf{n}_{b,t},\end{align} $
(6)
$\begin{align} \mathbf{y}_{u,t}&=\mathbf{H}_{ub,t}\mathbf{f}_{b,t}s_{b,t}+\mathbf{n}_{u,t}, \end{align} $

where $\mathbf{f}_{u,t}$ and $\mathbf{f}_{b,t}$ denote the beamforming vectors, $s_{u,t}$ and $s_{b,t}$ are the transmit signals with unit power, and $\mathbf{n}_{u,t}$ and $\mathbf{n}_{b,t}$ are the additive Gaussian noise vectors of the UAV and the GBS with variances $\sigma _{u}^{2}$ and $\sigma _{b}^{2}$, respectively. After applying combiners, the processed signals are given by:

(7)
$\begin{align} r_{b,t}&=\mathbf{w}_{b,t}^{H}\mathbf{H}_{bu,t}\mathbf{f}_{u,t}s_{u,t}+\overset{˜}{n}_{b,t},\end{align} $
(8)
$\begin{align} r_{u,t}&=\mathbf{w}_{u,t}^{H}\mathbf{H}_{ub,t}\mathbf{f}_{b,t}s_{b,t}+\overset{˜}{n}_{u,t}, \end{align} $

where $\mathbf{w}_{u,t}$ and $\mathbf{w}_{b,t}$ denote the combiner vectors of the UAV and the GBS, $\overset{˜}{n}_{b,t}=\mathbf{w}_{b,t}^{H}\mathbf{n}_{b,t}$, and $\overset{˜}{n}_{u,t}=\mathbf{w}_{u,t}^{H}\mathbf{n}_{u,t}$. The received signal-to-noise ratios (SNRs) at the GBS and the UAV are written as:

(9)
$\begin{align} \gamma _{b,t}&=\frac{\left| \mathbf{w}_{b,t}^{H}\mathbf{H}_{bu,t}\mathbf{f}_{u,t}\right| ^{2}}{\sigma _{b}^{2}},\end{align} $
(10)
$\begin{align} \gamma _{u,t}&=\frac{\left| \mathbf{w}_{u,t}^{H}\mathbf{H}_{ub,t}\mathbf{f}_{b,t}\right| ^{2}}{\sigma _{u}^{2}}. \end{align} $

In this paper, we consider the beamforming and combining vectors to be a UPA array response vector at a specific angle: $\mathbf{f}_{x,t}=\mathbf{w}_{x,t}=\mathbf{a}_{x}\left(\phi _{x,t},\theta _{x,t}\right)/\sqrt{M_{x}M_{y}}\,.$ Accordingly, the achievable rate is:

(11)
$ R_{x,t}=\log _{2}\left(1+\gamma _{x,t}\right). $
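To make the system model concrete, the following is a minimal NumPy sketch of (1)-(11) under a few stated assumptions: the product in (2) is realized as a rank-1 outer product with a column-vector convention, and all parameter values, angles, and helper names (upa_response, rician_channel) are illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def upa_response(phi, theta, Mx, My, d_over_lambda=0.5):
    """UPA array response of Eq. (4), flattened to a column vector."""
    mx, my = np.arange(Mx), np.arange(My)        # (m_x - 1), (m_y - 1)
    phase = 2 * np.pi * d_over_lambda * (
        mx[:, None] * np.sin(theta) * np.sin(phi) + my[None, :] * np.cos(theta))
    return np.exp(1j * phase).ravel()

def rician_channel(phi_b, theta_b, phi_u, theta_u, B=(8, 8), U=(4, 4),
                   D=200.0, P0=1.0, alpha=3.0, K_dB=15.0, rng=None):
    """UAV-to-GBS channel H_{bu,t} of Eqs. (1)-(2)."""
    rng = rng or np.random.default_rng()
    K = 10 ** (K_dB / 10)                        # Rician factor, linear scale
    a_b = upa_response(phi_b, theta_b, *B)
    a_u = upa_response(phi_u, theta_u, *U)
    H_los = np.outer(a_b, a_u.conj())            # rank-1 LOS component, Eq. (2)
    H_nlos = (rng.standard_normal((a_b.size, a_u.size))
              + 1j * rng.standard_normal((a_b.size, a_u.size))) / np.sqrt(2)
    g = np.sqrt(P0 / D ** alpha)                 # large-scale gain
    return g * (np.sqrt(K / (K + 1)) * H_los + np.sqrt(1 / (K + 1)) * H_nlos)

# Received SNR at the GBS, Eq. (9), and achievable rate, Eq. (11).
phi, theta = 0.3, 1.1                            # example angles [rad]
H = rician_channel(phi, theta, phi, theta)
f_u = upa_response(phi, theta, 4, 4) / np.sqrt(16)   # unit-norm UAV beamformer
w_b = upa_response(phi, theta, 8, 8) / np.sqrt(64)   # unit-norm GBS combiner
gamma_b = np.abs(w_b.conj() @ H @ f_u) ** 2 / 1e-9   # assumed noise power
rate = np.log2(1 + gamma_b)                          # Eq. (11)
```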

3. The Proposed Pairwise Beam Tracking Scheme

In multiple-input multiple-output (MIMO) systems, both communicating nodes are equipped with antenna arrays to implement beamforming and combining. However, conventional schemes usually consider only downlink beam tracking. Moreover, naively extending existing pilot-based techniques to both sides doubles the resources required for pilot transmission.

We propose a pairwise beam tracking scheme in which the communicating nodes on both sides participate in beam tracking with the aid of MoNet. Since each node in the considered A2G communication system is equipped with its own MoNet, the nodes can communicate with each other while simultaneously running a real-time beam tracking protocol. We first describe the structure and training of MoNet, which outputs the predicted angles. We then propose a real-time beam tracking protocol that requires no reference signals or pilots and recursively uses previous predictions as future inputs.

3.1 LSTM-based MoNet

Using (5) and (6), the GBS and the UAV predict the future azimuth and elevation angles to determine the beamforming and combining vectors. The proposed beam tracking model requires Q consecutive input samples to capture the time-series characteristics for the prediction task. The input samples required at each communication node can be written as

(12)
$ \mathbf{Y}_{x,t}=\left[\mathbf{y}_{x,t-Q+1},\mathbf{y}_{x,t-Q+2},\ldots ,\mathbf{y}_{x,t-q},\ldots ,\mathbf{y}_{x,t}\right], $

where $\mathbf{y}_{x,t-q}$ denotes the received signal sample at sample time t-q, which is given by either (5) or (6). As illustrated in Fig. 2, the beam tracking model consists of two parts: a feature extractor and a feature decoder. The feature extractor processes the input samples into the mobility embedding $e_{x,t}$ as follows:

(13)
$ e_{x,t}=\mathcal{F}_{x}\left(\mathbf{Y}_{x,t}\right), $

where $\mathcal{F}_{x}\left(\cdot \right)$ is the feature extractor. The feature extractor is composed of layers of LSTM to capture the time-series characteristics of the input samples and outputs the mobility embedding, which contains the time-correlated feature of the input samples. The extracted mobility embedding is then decoded by the feature decoder, which can be written as:

(14)
$$ \left(\hat{\phi}_{x, t+1}, \hat{\theta}_{x, t+1}\right)=\mathrm{G}_x\left(\tilde{e}_{x, t}\right), $$

where $\hat{\phi }_{x,t+1}$ and $\hat{\theta }_{x,t+1}$ denote the predicted azimuth and elevation angles, $\mathrm{G}_x(\cdot)$ is the feature decoder, and $\overset{˜}{e}_{x,t}$ is the mobility embedding after rectified linear unit (ReLU) activation. The feature decoder is composed of fully connected layers with parametric ReLU (PReLU) activations in between. The proposed model is trained to jointly minimize the mean squared error (MSE) between the predicted and actual angles:

(15)
$ \mathcal{L}_{MSE}=\mathcal{E}\left\{\left| \left(\phi _{x,n}-\hat{\phi }_{x,n}\right)\right| ^{2}+\left| \left(\theta _{x,n}-\hat{\theta }_{x,n}\right)\right| ^{2}\right\}, $

where $\mathcal{E}\left\{\cdot \right\}$ denotes the expectation operator.
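For concreteness, below is a minimal PyTorch sketch of MoNet and its training objective (13)-(15). The four LSTM layers and three fully connected layers match Sec. 4, but the layer widths, the Re/Im stacking of received samples into real-valued input features, and the batch shapes are assumptions.

```python
import torch
import torch.nn as nn

class MoNet(nn.Module):
    def __init__(self, in_dim, embed_dim=64, hidden=128):
        super().__init__()
        # Feature extractor F_x(.): stacked LSTM over Q received samples.
        self.extractor = nn.LSTM(input_size=in_dim, hidden_size=embed_dim,
                                 num_layers=4, batch_first=True)
        # Feature decoder G_x(.): FC layers with PReLU in between.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.PReLU(),
            nn.Linear(hidden, hidden), nn.PReLU(),
            nn.Linear(hidden, 2))            # (azimuth, elevation)

    def forward(self, Y):                    # Y: (batch, Q, in_dim)
        out, _ = self.extractor(Y)
        e = out[:, -1, :]                    # mobility embedding e_{x,t}, Eq. (13)
        e = torch.relu(e)                    # ReLU before decoding, Eq. (14)
        return self.decoder(e)               # (phi_hat, theta_hat) at t+1

# Training objective: joint MSE over both angles, Eq. (15).
model = MoNet(in_dim=2 * 64)                 # e.g., Re/Im of a 64-element signal
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

Y = torch.randn(32, 128, 2 * 64)             # dummy batch: Q = 128 input samples
angles = torch.randn(32, 2)                  # ground-truth (phi, theta)
loss = criterion(model(Y), angles)           # L_MSE
loss.backward(); optimizer.step()
```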

The feature extraction capability of the deep learning-based beam tracking model can be visualized using t-distributed stochastic neighbor embedding (t-SNE). We conducted a toy simulation on a circular trajectory, as illustrated in Fig. 3. As the UAV moves along the circular trajectory, the GBS receives signals from the UAV as in (5), which serve as inputs to the feature extractor. The time-series mobility embeddings extracted by the GBS are visualized in the right panel of Fig. 3, where the colors represent the sample times shown in the left panel. The mobility embeddings are continuous in the three-dimensional latent space, which means the beam tracking model of the GBS succeeded in capturing how the signal characteristics change during the continuous movement of the UAV.
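A sketch of how such a t-SNE plot could be produced, assuming the mobility embeddings $e_{b,t}$ collected along the trajectory are stacked into an array (the embedding dimension of 64 and the sample count are placeholders):

```python
import numpy as np
from sklearn.manifold import TSNE

embeddings = np.random.randn(500, 64)    # placeholder for e_{b,t} over time
z = TSNE(n_components=3, perplexity=30).fit_transform(embeddings)
# Color each point by its sample time t to reveal the continuous structure.
```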

Fig. 2. Architecture of LSTM-based MoNet.
../../Resources/ieie/IEIESPC.2023.12.3.269/fig2.png
Fig. 3. Example of mobility embedding in the circular trajectory. Illustration of circular trajectory (left) and t-SNE plot for corresponding mobility embeddings (right).
../../Resources/ieie/IEIESPC.2023.12.3.269/fig3.png

3.2 Real-time Beam Tracking Protocol

At each communication node, the output of the beam tracking model is used to construct the predicted beamforming and combining vectors $\hat{\mathbf{f}}_{x,t+1}$ and $\hat{\mathbf{w}}_{x,t+1}$. Assuming that the GBS and the UAV communicate with each other and apply the predicted beamforming vectors at sample time t+1, the received signal model can be written as:

(16)
$\begin{align} \hat{\mathbf{y}}_{b,t+1}&=\mathbf{H}_{bu,t+1}\hat{\mathbf{f}}_{u,t+1}s_{u,t+1}+\mathbf{n}_{b,t+1},\end{align} $
(17)
$\begin{align} \hat{\mathbf{y}}_{u,t+1}&=\mathbf{H}_{ub,t+1}\hat{\mathbf{f}}_{b,t+1}s_{b,t+1}+\mathbf{n}_{u,t+1}, \end{align} $

where $\hat{\mathbf{f}}_{x,t+1}=\mathbf{a}_{x}\left(\hat{\phi }_{x,t+1},\hat{\theta }_{x,t+1}\right)/\sqrt{M_{x}M_{y}}$. After applying the combining vectors, the received signals are:

(18)
$\begin{align} r_{b,t+1}&=\hat{\mathbf{w}}_{b,t+1}^{H}\mathbf{H}_{bu,t+1}\hat{\mathbf{f}}_{u,t+1}s_{u,t+1}+\overset{˜}{n}_{b,t+1},\end{align} $
(19)
$\begin{align} r_{u,t+1}&=\hat{\mathbf{w}}_{u,t+1}^{H}\mathbf{H}_{ub,t+1}\hat{\mathbf{f}}_{b,t+1}s_{b,t+1}+\overset{˜}{n}_{u,t+1}, \end{align} $

where $\hat{\mathbf{w}}_{x,t+1}=\hat{\mathbf{f}}_{x,t+1}$ and $\overset{˜}{n}_{x,t+1}=\hat{\mathbf{w}}_{x,t+1}^{H}\mathbf{n}_{x,t+1}$. Then, to predict $\hat{\phi }_{x,t+2}$ and $\hat{\theta }_{x,t+2}$, the received samples in (16) and (17) are recursively appended to the input window as:

(20)
$\mathbf{Y}_{x,t+1}=\left[\mathbf{y}_{x,t-Q+2},\mathbf{y}_{x,t-Q+3},\ldots ,\mathbf{y}_{x,t},\hat{\mathbf{y}}_{x,t+1}\right]$.

The pairwise beam tracking scheme is summarized in Algorithm 1.
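The following sketch illustrates the recursive protocol of (16)-(20) at one node: predict the next angles from the current window, steer the beam, and append the resulting received sample back into the window. The helpers upa_response (from the earlier sketch) and receive_sample (a stand-in for the physical channel and RF front end) are assumptions.

```python
import collections
import numpy as np
import torch

def track(model, window_init, receive_sample, Q=128, M=(4, 4), steps=1000):
    """window_init: Q initial received feature vectors; receive_sample:
    hypothetical interface returning the next sample under the applied beam."""
    window = collections.deque(window_init, maxlen=Q)    # Y_{x,t}, Eq. (12)
    for t in range(steps):
        Y = torch.as_tensor(np.stack(window), dtype=torch.float32)[None]
        with torch.no_grad():
            phi_hat, theta_hat = model(Y)[0].tolist()    # Eq. (14)
        # Steer beamformer/combiner to the predicted angles, Eqs. (16)-(19).
        f_hat = upa_response(phi_hat, theta_hat, *M) / np.sqrt(M[0] * M[1])
        y_next = receive_sample(f_hat)                   # y_hat_{x,t+1}
        window.append(y_next)                            # Eq. (20): recursion
```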

../../Resources/ieie/IEIESPC.2023.12.3.269/al1.png

4. Numerical Results

In the simulation, the UAV is assumed to move according to a quasi-static random walk model, randomly changing its velocity every 5 sample times. The speed of the UAV is drawn uniformly from [20, 35] m/s, and the direction change is drawn uniformly from [$-$${\pi}$/6, ${\pi}$/6]. The total trajectory length is 120 seconds, and the operating altitude of the UAV is held constant at 100 m.
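A sketch of this mobility model, interpreting the random direction as an incremental heading change redrawn every 5 samples; the initial position and heading are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.02, 6000                       # 20 ms samples, 120 s trajectory
pos = np.zeros((T, 3))
pos[0] = [200.0, 0.0, 100.0]             # assumed start; altitude fixed at 100 m
heading, speed = 0.0, rng.uniform(20.0, 35.0)
for t in range(1, T):
    if t % 5 == 0:                       # quasi-static: redraw every 5 samples
        speed = rng.uniform(20.0, 35.0)                  # speed in [20, 35] m/s
        heading += rng.uniform(-np.pi / 6, np.pi / 6)    # turn in [-pi/6, pi/6]
    pos[t] = pos[t - 1] + speed * dt * np.array(
        [np.cos(heading), np.sin(heading), 0.0])         # horizontal motion only
```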

The GBS and the UAV communicate at a carrier frequency of 30 GHz and are equipped with UPAs with B$_{x}$ = B$_{y}$ = 8 and U$_{x}$ = U$_{y}$ = 4 elements, respectively. For the channel parameters, the path loss exponent is ${\alpha}$ = 3, and the Rician factor is set to K = 15 dB. The time interval for each sample is 20 ms, and the number of input samples is Q = 128.

The received SNR is set to 20 dB unless stated otherwise. For the pilot-based reference schemes, we assumed that the time interval between two pilots is 200 ms. The feature extractor is composed of 4 layers of LSTM, and the feature decoder has 3 fully connected layers. The beam tracking model was trained with an adaptive moment estimation (Adam) optimizer and a learning rate of $10^{-4}$.
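As an illustration of how training pairs could be formed from a logged trajectory (the array names and shapes are assumptions), each window of Q = 128 received samples is paired with the angles one step ahead:

```python
import numpy as np

def make_windows(y, phi, theta, Q=128):
    """y: (T, F) received features; phi, theta: (T,) true angles.
    Returns (N, Q, F) input windows and (N, 2) next-step angle targets."""
    X = np.stack([y[t - Q + 1:t + 1] for t in range(Q - 1, len(y) - 1)])
    labels = np.stack([phi[Q:], theta[Q:]], axis=1)   # angles at t + 1
    return X, labels
```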

The instantaneous normalized beamforming gain can be calculated as:

(21)
$\begin{align} G_{b,t}&=\frac{\left| \mathbf{w}_{b,t}^{H}\mathbf{H}_{bu,t}\mathbf{f}_{u,t}\right| ^{2}}{\left\| \mathbf{H}_{bu,t}\right\| _{F}^{2}}, \end{align} $
(22)
$\begin{align} G_{u,t}&=\frac{\left| \mathbf{w}_{u,t}^{H}\mathbf{H}_{ub,t}\mathbf{f}_{b,t}\right| ^{2}}{\left\| \mathbf{H}_{ub,t}\right\| _{F}^{2}}, \end{align} $

where $G_{b,t}=G_{u,t}$ at any sample time t. In the dynamic pilot scheme [1], pilot transmission for beam search takes place when the beamforming gain falls below a threshold or when the time since the last beam search reaches the maximum interval; we set the gain threshold to 0.8. L denotes the maximum time interval for pilot transmission. For the periodic pilot scheme, the gain-threshold condition is deactivated, and L is the pilot transmission periodicity. A smaller L keeps the beamforming gain from degrading or aging over time at the cost of increased pilot overhead. The simulation was conducted on an unseen trajectory to test the generalizability of the proposed model.
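A small sketch of the normalized gain in (21)-(22) and the dynamic pilot trigger described above; needs_pilot is a hypothetical helper name:

```python
import numpy as np

def normalized_gain(w, H, f):
    """Instantaneous normalized beamforming gain, Eqs. (21)-(22)."""
    return np.abs(w.conj() @ H @ f) ** 2 / np.linalg.norm(H, 'fro') ** 2

def needs_pilot(gain, since_last, L, threshold=0.8):
    """Dynamic pilot trigger [1]; the periodic scheme keeps only the timer."""
    return gain < threshold or since_last >= L
```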

Fig. 4 shows the real-time normalized beamforming gain of the proposed scheme. The proposed model maintains a high beamforming gain without the severe gain drops that conventional pilot-based schemes exhibit. The stability of the proposed model is also demonstrated by the cumulative distribution function (CDF) of the normalized beamforming gain in Fig. 5. Unlike conventional pilot-based beam tracking schemes, the proposed scheme predicts the future beamforming vectors in real time and prevents performance degradation due to channel aging and fluctuation.

Fig. 6 compares the average achievable rate of the different beam tracking schemes. In highly variable environments, the conventional schemes fail to maintain a certain level of beamforming gain, whereas the proposed scheme maintains high beamforming gains. The conventional schemes also suffer additional performance degradation due to pilot overhead, which does not affect the proposed scheme.

Fig. 4. Real-time normalized beamforming gain.
../../Resources/ieie/IEIESPC.2023.12.3.269/fig4.png
Fig. 5. Cumulative distribution function (CDF) of normalized beamforming gain.
../../Resources/ieie/IEIESPC.2023.12.3.269/fig5.png
Fig. 6. Average achievable rate versus SNR for proposed beam tracking scheme and conventional schemes.
../../Resources/ieie/IEIESPC.2023.12.3.269/fig6.png

5. Conclusion

We proposed a real-time pairwise beam tracking protocol based on a deep learning-aided beam tracking model. The proposed scheme uses only the received signal samples to predict the beamforming vectors. Simulation results showed that it maintains a higher beamforming gain than conventional pilot-based schemes and, as a result, achieves superior communication rate performance.

ACKNOWLEDGMENTS

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2022R1A2C1011443).

REFERENCES

[1] Y. Huang, Q. Wu, T. Wang, G. Zhou, and R. Zhang, "3D beam tracking for cellular-connected UAV," IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 736-740, May 2020.
[2] Y. Zeng, Q. Wu, and R. Zhang, "Accessing from the sky: A tutorial on UAV communications for 5G and beyond," Proc. IEEE, vol. 107, no. 12, pp. 2327-2375, Dec. 2019.
[3] S. Ku, S. Jung, and C. Lee, "UAV trajectory design based on reinforcement learning for wireless power transfer," in Proc. 34th Int. Tech. Conf. Circuits/Syst. Comput. Commun. (ITC-CSCC), 2019, pp. 1-3.
[4] Y. Zeng, R. Zhang, and T. J. Lim, "Wireless communications with unmanned aerial vehicles: Opportunities and challenges," IEEE Commun. Mag., vol. 54, no. 5, pp. 36-42, May 2016.
[5] Q. Huang, M. Lin, T. A. Tsiftsis, J.-B. Wang, and J. Wang, "Energy efficient beamforming schemes for satellite-aerial-terrestrial networks," IEEE Trans. Commun., vol. 68, no. 6, pp. 3863-3875, Jun. 2020.
[6] C. Liu, W. Yuan, Z. Wei, X. Liu, and D. W. K. Ng, "Location-aware predictive beamforming for UAV communications: A deep learning approach," IEEE Wireless Commun. Lett., vol. 10, no. 3, pp. 668-672, Mar. 2021.
[7] L. Zhu, J. Zhang, Z. Xiao, X. Cao, D. O. Wu, and X.-G. Xia, "3-D beamforming for flexible coverage in millimeter-wave UAV communications," IEEE Wireless Commun. Lett., vol. 8, no. 3, pp. 837-840, Jun. 2019.
[8] S. H. Lim, S. Kim, B. Shim, and J. W. Choi, "Deep learning-based beam tracking for millimeter-wave communications under mobility," IEEE Trans. Commun., vol. 69, no. 11, pp. 7458-7469, Nov. 2021.
[9] F. Liu, P. Zhao, and Z. Wang, "EKF-based beam tracking for mmWave MIMO systems," IEEE Commun. Lett., vol. 23, no. 12, pp. 2390-2393, Dec. 2019.
[10] V. Va, H. Vikalo, and R. W. Heath, "Beam tracking for mobile millimeter wave communication systems," in Proc. IEEE Global Conf. Signal Inf. Process. (GlobalSIP), 2016, pp. 743-747.
[11] H.-L. Song and Y.-C. Ko, "Robust and low complexity beam tracking with monopulse signal for UAV communications," IEEE Trans. Veh. Technol., vol. 70, no. 4, pp. 3505-3513, Apr. 2021.
[12] W. Zhang, W. Zhang, and J. Wu, "UAV beam alignment for highly mobile millimeter wave communications," IEEE Trans. Veh. Technol., vol. 69, no. 8, pp. 8577-8585, Aug. 2020.
[13] L. Yang and W. Zhang, "Beam tracking and optimization for UAV communications," IEEE Trans. Wireless Commun., vol. 18, no. 11, pp. 5367-5379, Nov. 2019.
[14] C. Eom and C. Lee, "Hybrid neural network-based fading channel prediction for link adaptation," IEEE Access, vol. 9, pp. 117257-117266, 2021.
[15] S. H. Lim, S. Kim, B. Shim, and J. W. Choi, "Deep learning-based beam tracking for millimeter-wave communications under mobility," IEEE Trans. Commun., vol. 69, no. 11, pp. 7458-7469, Nov. 2021.

Author

Seokju Kim
../../Resources/ieie/IEIESPC.2023.12.3.269/au1.png

Seokju Kim received the B.S. degree in Electrical and Electronic Engineering from Yonsei University, Seoul, Republic of Korea, in 2020. He is currently working toward the integrated M.S. and Ph.D. degree in Electrical and Electronic Engineering at Yonsei University. His research interests include wireless communication systems, multiple-input multiple-output systems, UAV communications, and machine learning.

Jeongjoon Lee
../../Resources/ieie/IEIESPC.2023.12.3.269/au2.png

Jeongjoon Lee received the B.S. degree in Electrical and Electronic Engineering from Yonsei University, Seoul, Republic of Korea, in 2021. He is currently working toward the integrated M.S. and Ph.D. degree in Electrical and Electronic Engineering at Yonsei University. His research interests include wireless communication systems, multiple-input multiple-output systems, satellite communications, and machine learning.

Chungyong Lee
../../Resources/ieie/IEIESPC.2023.12.3.269/au3.png

Chungyong Lee received the B.S. and M.S. degrees in electronic engineering from Yonsei University, Seoul, South Korea, in 1987 and 1989, respectively, and the Ph.D. degree in electrical and computer engineering from Georgia Institute of Technology, Atlanta, GA, USA, in 1995. From 1996 to 1997, he was a Senior Engineer with Samsung Electronics Company, Ltd., Kiheung, South Korea. Since 1997, he has been with the School of Electrical and Electronic Engineering, Yonsei University, where he is currently a Professor. His research interests include array signal processing and communication signal processing.