Bayesian statistics is controversial amongst frequentists, sometimes for purely terminological reasons, and sometimes for profound philosophical ones.
I’m going to ignore that, because sometimes it is simply practical to use Bayesian statistics. Even for frequentists it can be refreshing to move your effort from deriving frequentist estimators for intractable models to just using the damn Bayesian ones, which might just work.
Anyway, you can avoid learning a lot of tedious frequentist machinery by starting with a prior belief that your model isn’t too pathological for a Bayesian MCMC sampler and proceeding accordingly. (You might, of course, need to prove some horrible Bayesian MCMC convergence results instead.) If it works and you are feeling fancy, you might then justify your method on frequentist grounds, but then you are wiping out one interesting source of after-dinner argument.
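To make the “just run a sampler” attitude concrete, here is a minimal sketch of the idea in pure Python: a random-walk Metropolis sampler on a toy coin-flip model. The model, parameter names, and step size are all my own illustrative choices, not anything from the libraries below; the point is only that you can get a usable posterior without deriving any estimator by hand, and the conjugate Beta posterior lets us sanity-check the answer.

```python
import math
import random

random.seed(1)

# Toy model: coin with unknown bias p, uniform Beta(1, 1) prior,
# and 7 heads observed in 10 flips. The conjugate posterior is
# Beta(8, 4), mean 8/12 ≈ 0.667, which gives us a ground truth.
heads, flips = 7, 10

def log_posterior(p):
    # log prior (constant, so omitted) + log likelihood; -inf outside (0, 1)
    if not 0.0 < p < 1.0:
        return -math.inf
    return heads * math.log(p) + (flips - heads) * math.log(1.0 - p)

def metropolis(n_samples, step=0.1, start=0.5):
    # Random-walk Metropolis: Gaussian proposals, accept with
    # probability min(1, posterior ratio), done in log space.
    samples, p = [], start
    lp = log_posterior(p)
    for _ in range(n_samples):
        proposal = p + random.gauss(0.0, step)
        lp_new = log_posterior(proposal)
        if math.log(random.random()) < lp_new - lp:
            p, lp = proposal, lp_new
        samples.append(p)
    return samples

draws = metropolis(20000)[5000:]  # discard burn-in
posterior_mean = sum(draws) / len(draws)
print(posterior_mean)  # should land near the conjugate answer 8/12
```

Real problems want adaptive step sizes, multiple chains, and convergence diagnostics, which is exactly what the tools below provide.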
Stan is the go-to inference toolbox, especially for hierarchical models.
PyMC3 is pure Python, which means you don’t need C++ to fix things like you do in Stan. It’s presumably slower than Stan if you run serious MC simulations, but I haven’t benchmarked that.
Level up your esotericism with Church, a general-purpose Turing-complete Monte Carlo Lisp derivative, which is slow as a thousand miles of baby-arse but reputedly does some cute tricks modelling human problem-solving, according to creators Noah Goodman and Joshua Tenenbaum.
See also Anglican, which is the same but different.
What is this? Work by MacKay, Langford, Shawe-Taylor and Seeger connecting Bayesian methods to the frequentist PAC-learning paradigm (PAC-Bayes).
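For flavour, a typical McAllester-style PAC-Bayes bound looks roughly like the following; I’m quoting the standard textbook form rather than any particular author’s exact statement. With probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for every “posterior” distribution $\rho$ over hypotheses (with the “prior” $\pi$ fixed before seeing data):

$$
\mathbb{E}_{h \sim \rho}\left[ R(h) \right]
\;\le\;
\mathbb{E}_{h \sim \rho}\left[ \hat{R}(h) \right]
+ \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln \frac{2\sqrt{n}}{\delta}}{2n}}
$$

where $R$ is the true risk and $\hat{R}$ the empirical risk. The frequentist guarantee holds for any $\rho$, but the bound is tightest when $\rho$ stays close (in KL) to the prior — which is why Bayesian posteriors make natural candidates.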