This dissertation explores how to improve estimation or prediction for a target dataset by learning from several related datasets, a strategy known as transfer learning. While transfer learning has shown promise across many fields, most existing methods rely on assumptions such as independence among datasets. In real data, especially time series such as financial data, this assumption often fails. For example, financial time series frequently exhibit local trends and extreme observations, making prediction more challenging. To address these issues, we evaluate the performance of existing transfer learning methods on dependent and heavy-tailed time series data. We also introduce a new screening procedure that identifies which related time series are beneficial for transfer learning and which may cause negative transfer, where adding extra data worsens estimation or prediction. The proposed screening method is computationally efficient, leverages the dynamic structure of time series data, and bypasses the limitations of traditional approaches that assume independence. We illustrate the proposed methods using a panel of stock price data underlying the calculation of the Dow Jones index. Our results show that, when carefully adapted to the structure of time series data, transfer learning can substantially improve forecasting accuracy in real-world applications.