Misinformation is a pressing problem, but the people working to mitigate it are overwhelmed by the amount of false content produced online every day. To assist human experts, several projects propose computational methods to support the detection of malicious content online. In the first part of the lecture, we will survey the different approaches, ranging from human-in-the-loop and crowdsourced solutions to fully automated ones. In the second part, we will focus on data-driven verification for computational fact checking. We will review methods that combine techniques from the ML and NLP literature to build data-driven verification systems. We will also cover how the rich semantics in knowledge graphs and pre-trained language models can be used to verify claims and produce explanations, a key requirement in this space. Better access to data and new algorithms are pushing computational fact checking forward, with experimental results showing that verification methods enable effective labeling of claims. However, while fact checkers are starting to adopt some of the resulting tools, the fight against misinformation is far from won. In the last part of the lecture, we will discuss the opportunities and limitations of computational methods and their role in fighting misinformation.
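To make the idea of knowledge-graph-based verification with explanations concrete, here is a minimal sketch. The graph, the `verify_claim` function, and all data are hypothetical illustrations, not part of any system discussed in the lecture: a claim shaped as a (subject, predicate, object) triple is checked against a toy knowledge graph, and the returned label comes with a short human-readable explanation.

```python
# Hypothetical toy knowledge graph: a set of (subject, predicate, object) triples.
KG = {
    ("Paris", "capital_of", "France"),
    ("France", "member_of", "EU"),
    ("Berlin", "capital_of", "Germany"),
}

def verify_claim(subj, pred, obj, kg=KG):
    """Label a triple-shaped claim as SUPPORTED, REFUTED, or
    NOT ENOUGH INFO, and return an explanation for the label."""
    if (subj, pred, obj) in kg:
        return "SUPPORTED", f"KG contains ({subj}, {pred}, {obj})."
    # A triple with the same subject and predicate but a different
    # object contradicts the claim (assuming the predicate is functional).
    conflicts = [t for t in kg if t[0] == subj and t[1] == pred and t[2] != obj]
    if conflicts:
        return "REFUTED", f"KG instead contains {conflicts[0]}."
    return "NOT ENOUGH INFO", "No matching or conflicting triple found."

label, why = verify_claim("Paris", "capital_of", "Germany")
print(label, "-", why)
```

Real systems replace the exact-match lookup with learned entity linking and embedding-based triple scoring, but the explanation requirement stays the same: the label must point back to the evidence that produced it.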