[1] Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press (draft), 2018. [ bib | .pdf ]
[2] Tor Lattimore, Branislav Kveton, Shuai Li, and Csaba Szepesvári. TopRank: A practical algorithm for online stochastic ranking. Technical report, 2018. [ bib ]
[3] Tor Lattimore and Csaba Szepesvári. Cleaning up the neighbourhood: A full classification of finite adversarial partial monitoring. Technical report, 2018. [ bib ]
[4] Joel Veness, Tor Lattimore, Avishkar Bhoopchand, Agnieszka Grabska-Barwinska, Christopher Mattern, and Peter Toth. Online learning with gated linear networks. Technical report, 2017. [ bib ]
[5] Christoph Dann, Tor Lattimore, and Emma Brunskill. Unifying PAC and regret: Uniform PAC bounds for episodic reinforcement learning. In Proceedings of the 30th Conference on Neural Information Processing Systems, 2017. [ bib ]
[6] Laurent Orseau, Tor Lattimore, and Shane Legg. Soft-Bayes: Prod for mixtures of experts with log-loss. In Proceedings of the 28th International Conference on Algorithmic Learning Theory, 2017. [ bib ]
[7] Tor Lattimore. A scale free algorithm for stochastic bandits with bounded kurtosis. In Proceedings of the 30th Conference on Neural Information Processing Systems, 2017. [ bib ]
[8] Tor Lattimore and Csaba Szepesvári. The end of optimism? An asymptotic analysis of finite-armed linear bandits. In Aarti Singh and Jerry Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 728--737, Fort Lauderdale, FL, USA, 20--22 Apr 2017. PMLR. [ bib ]
[9] Tor Lattimore. The Pareto regret frontier for bandits. In Proceedings of the 28th Conference on Neural Information Processing Systems (NIPS), 2015. [ bib ]
[10] Tor Lattimore. Regret analysis of the anytime optimally confident UCB algorithm. Technical report, 2016. [ bib ]
[11] Sébastien Gerchinovitz and Tor Lattimore. Refined lower bounds for adversarial bandits. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016. [ bib ]
[12] Finnian Lattimore, Tor Lattimore, and Mark Reid. Causal bandits: Learning good interventions via causal inference. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016. [ bib ]
[13] Ruitong Huang, Tor Lattimore, András György, and Csaba Szepesvári. Following the leader and fast rates in linear prediction: Curved constraint sets and other regularities. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016. [ bib ]
[14] Aurélien Garivier, Emilie Kaufmann, and Tor Lattimore. On explore-then-commit strategies. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 2016. [ bib ]
[15] Jan Leike, Tor Lattimore, Laurent Orseau, and Marcus Hutter. Thompson sampling is asymptotically optimal in general environments. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), 2016. [ bib ]
[16] Tor Lattimore and Marcus Hutter. Asymptotics of continuous Bayes for non-i.i.d. sources. Technical report, 2014. [ bib | http ]
[17] Tor Lattimore. Optimally confident UCB: Improved regret for finite-armed bandits. Technical report, 2015. [ bib | http ]
[18] Tor Lattimore. Regret analysis of the finite-horizon Gittins index strategy for multi-armed bandits. In Proceedings of the Conference on Learning Theory (COLT), 2016. [ bib ]
[19] Yifan Wu, Roshan Shariff, Tor Lattimore, and Csaba Szepesvári. Conservative bandits. In Proceedings of the International Conference on Machine Learning (ICML), 2016. [ bib ]
[20] Tor Lattimore and Marcus Hutter. On Martin-Löf (non-)convergence of Solomonoff's universal mixture. Theoretical Computer Science, 2014. [ bib ]
[21] Tor Lattimore, Koby Crammer, and Csaba Szepesvári. Linear multi-resource allocation with semi-bandit feedback. In Proceedings of the 28th Conference on Neural Information Processing Systems (NIPS), 2015. [ bib ]
[22] Tor Lattimore and Marcus Hutter. Bayesian reinforcement learning with exploration. In Proceedings of the 25th Conference on Algorithmic Learning Theory (ALT), 2014. [ bib ]
[23] Tor Lattimore and Rémi Munos. Bounded regret for finite-armed structured bandits. In Proceedings of the 27th Conference on Neural Information Processing Systems (NIPS), 2014. [ bib ]
[24] Tor Lattimore, András György, and Csaba Szepesvári. On learning the optimal waiting time. In Proceedings of the 25th Conference on Algorithmic Learning Theory (ALT), 2014. [ bib ]
[25] Tor Lattimore, Koby Crammer, and Csaba Szepesvári. Optimal resource allocation with semi-bandit feedback. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence (UAI), 2014. [ bib ]
[26] Tom Everitt, Tor Lattimore, and Marcus Hutter. Free lunch for optimisation under the universal distribution. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2014. [ bib ]
[27] Tor Lattimore, Marcus Hutter, and Peter Sunehag. The sample-complexity of general reinforcement learning. In Proceedings of the 30th International Conference on Machine Learning, 2013. [ bib ]
[28] Tor Lattimore, Marcus Hutter, and Peter Sunehag. Concentration and confidence for discrete Bayesian sequence predictors. In Sanjay Jain, Rémi Munos, Frank Stephan, and Thomas Zeugmann, editors, Proceedings of the 24th International Conference on Algorithmic Learning Theory, pages 324--338. Springer, 2013. [ bib ]
[29] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In Nader Bshouty, Gilles Stoltz, Nicolas Vayatis, and Thomas Zeugmann, editors, Proceedings of the 23rd International Conference on Algorithmic Learning Theory, volume 7568 of Lecture Notes in Computer Science, pages 320--334. Springer Berlin / Heidelberg, 2012. [ bib ]
[30] Tor Lattimore and Marcus Hutter. On Martin-Löf convergence of Solomonoff's mixture. In T-H. Hubert Chan, Lap Chi Lau, and Luca Trevisan, editors, Theory and Applications of Models of Computation, volume 7876 of Lecture Notes in Computer Science, pages 212--223. Springer Berlin Heidelberg, 2013. [ bib ]
[31] Tor Lattimore, Marcus Hutter, and Vaibhav Gavane. Universal prediction of selected bits. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Proceedings of the 22nd International Conference on Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 262--276. Springer Berlin / Heidelberg, 2011. [ bib ]
[32] Tor Lattimore and Marcus Hutter. No free lunch versus Occam's razor in supervised learning. In David Dowe, editor, Algorithmic Probability and Friends. Bayesian Prediction and Artificial Intelligence, volume 7070 of Lecture Notes in Computer Science, pages 223--235. Springer Berlin Heidelberg, 2013. [ bib ]
[33] Tor Lattimore and Marcus Hutter. Asymptotically optimal agents. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Proceedings of the 22nd International Conference on Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 368--382. Springer Berlin / Heidelberg, 2011. [ bib ]
[34] Tor Lattimore and Marcus Hutter. Time consistent discounting. In Jyrki Kivinen, Csaba Szepesvári, Esko Ukkonen, and Thomas Zeugmann, editors, Proceedings of the 22nd International Conference on Algorithmic Learning Theory, volume 6925 of Lecture Notes in Computer Science, pages 383--397. Springer Berlin / Heidelberg, 2011. [ bib ]
[35] Laurent Orseau, Tor Lattimore, and Marcus Hutter. Universal knowledge-seeking agents for stochastic environments. In Sanjay Jain, Rémi Munos, Frank Stephan, and Thomas Zeugmann, editors, Proceedings of the 24th International Conference on Algorithmic Learning Theory, volume 8139 of Lecture Notes in Computer Science, pages 158--172. Springer Berlin Heidelberg, 2013. [ bib ]
[36] Tor Lattimore and Marcus Hutter. General time consistent discounting. Theoretical Computer Science, 519:140--154, 2014. [ bib ]

This file was generated by bibtex2html 1.98.