
Provide a way to associate context with requests #836

Closed
yrro opened this issue Mar 20, 2016 · 4 comments

Comments

@yrro

yrro commented Mar 20, 2016

Long story short

I'm trying to fetch from a number of URLs. When a request succeeds, I use the request URL as a key to look up contextual data about the request, to be used along with the fetched data. But if the request was redirected, resp.url no longer matches the URL I requested, so I have to look into the response history to determine the original URL. This is possible but inconvenient.

If a request fails, I can catch the exception, but I have no way to access the original request, so I can't look up the data.

Expected behaviour

I'm not sure what I want the solution to look like, but I'd like some way of associating contextual data with a request so that it can be retrieved once the request is complete, regardless of whether it succeeded or failed.

Steps to reproduce

import aiohttp
import asyncio

async def foo(session):
    urls = {'http://site.example/{}'.format(i): {'data': i} for i in range(3)}
    for resp_f in asyncio.as_completed([session.get(url) for url in urls.keys()]):
        try:
            async with await resp_f as resp:
                data = await resp.read()
        except Exception as e:
            # no way to get the original URL to look up in urls mapping
            print(str(e) + ' when fetching data for ?')
        else:
            # will throw KeyError if there was a redirect, unless I use the
            # url from the last history entry
            print('fetched ' + str(len(data)) + ' bytes for ' + str(urls[resp.url]['data']))

loop = asyncio.get_event_loop()
with aiohttp.ClientSession() as session:
    loop.run_until_complete(foo(session))
loop.close()
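
For the redirect case, one workaround (just a sketch; original_key is a hypothetical helper, and it assumes resp.history holds the responses for any preceding redirects) is to fall back to the URLs recorded in the history when resp.url itself is not a key in the mapping:

def original_key(resp, urls):
    # resp.url is the final URL after any redirects; if it is not a key in
    # the mapping, try the URLs recorded in the redirect history instead.
    if str(resp.url) in urls:
        return str(resp.url)
    for prev in resp.history:
        if str(prev.url) in urls:
            return str(prev.url)
    raise KeyError(resp.url)

That only helps for responses that actually arrive, though; when the request raises, there is no response object to inspect, which is the gap described above.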

Your environment

python 3.5.1+ on Debian stretch
aiohttp 0.21.4

@popravich
Copy link
Member

Hi, I think it's pretty trivial: you can simply replace session.get(url) with something like this:

async def foo(session):

    async def get(original_url):
        resp = await session.get(original_url)
        return original_url, resp

    urls = {'http://site.example/{}'.format(i): {'data': i} for i in range(3)}
    for resp_f in asyncio.as_completed([get(url) for url in urls.keys()]):
        try:
            original_url, resp = await resp_f
            async with resp:
                data = await resp.read()
        except Exception as exc:
            print('request failed: ' + str(exc))
        else:
            print('fetched ' + str(len(data)) + ' bytes for ' + str(urls[original_url]['data']))

which is more flexible

@asvetlov
Member

Agree with @popravich

@yrro
Author

yrro commented Apr 2, 2016

If the request fails then await resp_f still throws, leaving me with no way to access original_url from within the exception handler.
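
One way to work around this (a sketch extending the wrapper idea above, not something aiohttp provides; fetch_with_context is an illustrative name) is to catch the exception inside the wrapper coroutine and return it alongside the URL and context, so that awaiting the future never raises:

import aiohttp
import asyncio

async def fetch_with_context(session, url, context):
    # Catch failures inside the wrapper so the caller always gets the URL
    # and context back, whether the request succeeded or raised.
    try:
        resp = await session.get(url)
        async with resp:
            data = await resp.read()
        return url, context, data, None
    except Exception as exc:
        return url, context, None, exc

async def foo(session):
    urls = {'http://site.example/{}'.format(i): {'data': i} for i in range(3)}
    tasks = [fetch_with_context(session, url, ctx) for url, ctx in urls.items()]
    for fut in asyncio.as_completed(tasks):
        url, context, data, exc = await fut
        if exc is not None:
            print(str(exc) + ' when fetching data for ' + str(context['data']))
        else:
            print('fetched ' + str(len(data)) + ' bytes for ' + str(context['data']))

The caller then branches on whether exc is None instead of wrapping the await in try/except, so the context stays available in both the success and the failure path; it can be driven with the same event-loop setup as the snippet in the issue.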

@lock

lock bot commented Oct 29, 2019

This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.

If you feel there are important points made in this discussion,
please include those excerpts in the new issue.

@lock lock bot added the outdated label Oct 29, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Oct 29, 2019