Closed
Description
Python already parses URLs, and does it correctly:
>>> from urllib.parse import urlparse
>>> urlparse('https://google.com')
ParseResult(scheme='https', netloc='google.com', path='', params='', query='', fragment='')
>>> urlparse('gopher://gopher.waynewerner.com')
ParseResult(scheme='gopher', netloc='gopher.waynewerner.com', path='', params='', query='', fragment='')
>>> urlparse('tel://555-555-5555')
ParseResult(scheme='tel', netloc='555-555-5555', path='', params='', query='', fragment='')
>>> urlparse('file:///path-to-some-file')
ParseResult(scheme='file', netloc='', path='/path-to-some-file', params='', query='', fragment='')
>>> urlparse('missing-scheme.com')
ParseResult(scheme='', netloc='', path='missing-scheme.com', params='', query='', fragment='')
I had to chase down this library because click-params uses validators to validate URLs, but totally valid URLs are rejected simply because this library doesn't recognize their scheme 😞
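For comparison, here is a minimal, scheme-agnostic sanity check built on the stdlib parser alone. This is a hypothetical helper (`is_plausible_url` is not part of validators' API), and it only checks that a URL has a scheme plus either a netloc or a path — enough to accept the gopher, tel, and file examples above while rejecting the scheme-less one:

```python
from urllib.parse import urlparse


def is_plausible_url(value: str) -> bool:
    """Hypothetical scheme-agnostic check using only urllib.parse.

    Accepts any URL with a scheme and either a netloc or a path
    (the path case covers file:// URLs, whose netloc is empty).
    """
    try:
        parts = urlparse(value)
    except ValueError:
        # urlparse raises ValueError for some malformed inputs,
        # e.g. an unclosed IPv6 bracket like 'http://[::1'
        return False
    return bool(parts.scheme) and bool(parts.netloc or parts.path)


print(is_plausible_url('gopher://gopher.waynewerner.com'))  # True
print(is_plausible_url('file:///path-to-some-file'))        # True
print(is_plausible_url('missing-scheme.com'))               # False
```

This is only a sketch of the behavior being asked for, not a proposed patch — the point is that the stdlib already parses these URLs without hardcoding an allowed-scheme list.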