In this presentation we introduce (implicit) Deep Adaptive Design (DAD), a new policy-based method for performing Bayesian experimental design in real time. The traditional approach to adaptive experimentation is a two-step procedure consisting of posterior inference followed by optimisation of the expected information gain (EIG) objective. Both steps usually require heavy computation during the experiment, making the traditional approach unsuitable for many real-world applications where decisions must be made quickly. DAD addresses this restriction by learning an amortised design network that takes past design-outcome pairs as input and outputs the design for the next stage of the experiment in a single forward pass. We demonstrate the applicability of our method on a number of experiments and show that it provides a fast and effective mechanism for performing adaptive experiments with a wide class of models.
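
For context (in standard notation, not spelled out in the abstract itself), the EIG that the traditional approach optimises at each step is the expected reduction in uncertainty about the model parameters $\theta$ from observing outcome $y$ under design $\xi$:

$$\mathrm{EIG}(\xi) = \mathbb{E}_{p(\theta)\,p(y \mid \theta, \xi)}\!\left[\log \frac{p(y \mid \theta, \xi)}{p(y \mid \xi)}\right], \qquad p(y \mid \xi) = \int p(y \mid \theta, \xi)\, p(\theta)\, d\theta.$$

In the adaptive setting, $p(\theta)$ is replaced by the running posterior given the history of design-outcome pairs, which is exactly why both posterior inference and EIG optimisation must be re-run between every pair of measurements in the traditional approach.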
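
To make the "single forward pass" idea concrete, below is a minimal sketch of what such an amortised design network could look like, assuming a PyTorch implementation with one-dimensional designs and outcomes. The names (DesignNetwork, empty_rep, and so on) are illustrative rather than taken from any DAD codebase, and the architecture is deliberately simplified; the sum-pooled encoder is only meant to reflect the permutation-invariant treatment of the experiment history.

```python
import torch
import torch.nn as nn

class DesignNetwork(nn.Module):
    """Hypothetical amortised design policy: maps the history of
    (design, outcome) pairs to the next design in one forward pass."""

    def __init__(self, design_dim=1, outcome_dim=1, hidden=64):
        super().__init__()
        # Encoder applied to each (design, outcome) pair independently.
        self.encoder = nn.Sequential(
            nn.Linear(design_dim + outcome_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # Decoder maps the pooled history representation to a design.
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, design_dim),
        )
        # Learned representation used when there is no history yet.
        self.empty_rep = nn.Parameter(torch.zeros(hidden))

    def forward(self, designs, outcomes):
        # designs: (t, design_dim), outcomes: (t, outcome_dim); t may be 0.
        if designs.shape[0] == 0:
            pooled = self.empty_rep
        else:
            pairs = torch.cat([designs, outcomes], dim=-1)
            # Sum-pooling makes the representation permutation-invariant,
            # mirroring the exchangeable treatment of past observations.
            pooled = self.encoder(pairs).sum(dim=0)
        return self.decoder(pooled)

# At deployment, each new design is a single forward pass: no posterior
# inference or EIG optimisation inside the experiment loop.
policy = DesignNetwork()
designs = torch.empty(0, 1)
outcomes = torch.empty(0, 1)
for t in range(5):
    with torch.no_grad():
        xi = policy(designs, outcomes)  # next design
    y = torch.randn(1)                  # stand-in for the real experiment outcome
    designs = torch.cat([designs, xi.unsqueeze(0)])
    outcomes = torch.cat([outcomes, y.unsqueeze(0)])
```

In this setup the expensive work, training the network offline on simulated experiment rollouts to maximise a bound on the total information gained, happens entirely before the experiment starts, so the deployment loop above contains no inference or optimisation.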