You can right-click a successful request in your browser's network devtools to copy the request with all its parameters (headers, cookies, querystring, user-agent, ...).
Start by mimicking the browser's behavior as closely as you can, then narrow down which parameters actually matter.
I've never needed Selenium etc. for the web scraping I do; it's often sufficient to just use fetch() in JS or requests in Python.
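As a minimal sketch of that workflow with `requests` (the URL, header values, and cookie here are made up, not from a real copied request), you can paste the copied parameters into a prepared request and inspect exactly what would be sent before firing it:

```python
import requests

# Headers copied from devtools ("Copy as cURL" gives the same info);
# these values are placeholders, not real ones.
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "Accept": "application/json",
    "Referer": "https://example.com/page",
}
cookies = {"session": "abc123"}  # hypothetical cookie from the copied request

req = requests.Request(
    "GET",
    "https://example.com/api/items",  # hypothetical endpoint
    params={"page": 1},
    headers=headers,
    cookies=cookies,
).prepare()

# Inspect the prepared request without hitting the network:
print(req.url)                    # querystring is encoded into the URL
print(req.headers["User-Agent"])  # headers as they would be sent
# To actually send it: requests.Session().send(req)
```

From there you can delete headers/cookies one at a time and re-send until you find the minimal set the server actually checks.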
I once even had to get past a Cloudflare captcha; I just solved it manually, and the resulting token was valid for a month or so.