Server: LiteSpeed
System: Linux php-prod-1.spaceapp.ru 5.15.0-157-generic #167-Ubuntu SMP Wed Sep 17 21:35:53 UTC 2025 x86_64
User: xnsbb3110 (1041)
PHP: 8.1.33
Disabled: NONE
File: /usr/local/CyberCP/lib/python3.10/site-packages/tldextract/__pycache__/tldextract.cpython-310.pyc
"""`tldextract` accurately separates a URL's subdomain, domain, and public suffix.

It does this via the Public Suffix List (PSL).

    >>> import tldextract

    >>> tldextract.extract('http://forums.news.cnn.com/')
    ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)

    >>> tldextract.extract('http://forums.bbc.co.uk/') # United Kingdom
    ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk', is_private=False)

    >>> tldextract.extract('http://www.worldbank.org.kg/') # Kyrgyzstan
    ExtractResult(subdomain='www', domain='worldbank', suffix='org.kg', is_private=False)

Note subdomain and suffix are _optional_. Not all URL-like inputs have a
subdomain or a valid suffix.

    >>> tldextract.extract('google.com')
    ExtractResult(subdomain='', domain='google', suffix='com', is_private=False)

    >>> tldextract.extract('google.notavalidsuffix')
    ExtractResult(subdomain='google', domain='notavalidsuffix', suffix='', is_private=False)

    >>> tldextract.extract('http://127.0.0.1:8080/deployed/')
    ExtractResult(subdomain='', domain='127.0.0.1', suffix='', is_private=False)

To rejoin the original hostname, if it was indeed a valid, registered hostname:

    >>> ext = tldextract.extract('http://forums.bbc.co.uk')
    >>> ext.registered_domain
    'bbc.co.uk'
    >>> ext.fqdn
    'forums.bbc.co.uk'
"""

from __future__ import annotations

import logging
import os
import urllib.parse
from collections.abc import Collection, Sequence
from dataclasses import dataclass
from functools import wraps

import idna
import requests

from .cache import DiskCache, get_cache_dir
from .remote import lenient_netloc, looks_like_ip, looks_like_ipv6
from .suffix_list import get_suffix_lists

LOG = logging.getLogger("tldextract")

CACHE_TIMEOUT = os.environ.get("TLDEXTRACT_CACHE_TIMEOUT")

PUBLIC_SUFFIX_LIST_URLS = (
    "https://publicsuffix.org/list/public_suffix_list.dat",
    "https://raw.githubusercontent.com/publicsuffix/list/master/public_suffix_list.dat",
)


@dataclass(order=True)
class ExtractResult:
    """A URL's extracted subdomain, domain, and suffix.

    Also contains metadata, like a flag that indicates if the URL has a private suffix.
    """

    subdomain: str
    domain: str
    suffix: str
    is_private: bool

    @property
    def registered_domain(self) -> str:
        """Joins the domain and suffix fields with a dot, if they're both set.

        >>> extract('http://forums.bbc.co.uk').registered_domain
        'bbc.co.uk'
        >>> extract('http://localhost:8080').registered_domain
        ''
        """
        if self.suffix and self.domain:
            return f"{self.domain}.{self.suffix}"
        return ""

    @property
    def fqdn(self) -> str:
        """Returns a Fully Qualified Domain Name, if there is a proper domain/suffix.

        >>> extract('http://forums.bbc.co.uk/path/to/file').fqdn
        'forums.bbc.co.uk'
        >>> extract('http://localhost:8080').fqdn
        ''
        """
        if self.suffix and (self.domain or self.is_private):
            return ".".join(i for i in (self.subdomain, self.domain, self.suffix) if i)
        return ""

    @property
    def ipv4(self) -> str:
        """Returns the ipv4 if that is what the presented domain/url is.

        >>> extract('http://127.0.0.1/path/to/file').ipv4
        '127.0.0.1'
        >>> extract('http://127.0.0.1.1/path/to/file').ipv4
        ''
        >>> extract('http://256.1.1.1').ipv4
        ''
        """
        if (
            self.domain
            and not (self.suffix or self.subdomain)
            and looks_like_ip(self.domain)
        ):
            return self.domain
        return ""

    @property
    def ipv6(self) -> str:
        """Returns the ipv6 if that is what the presented domain/url is.

        >>> extract('http://[aBcD:ef01:2345:6789:aBcD:ef01:127.0.0.1]/path/to/file').ipv6
        'aBcD:ef01:2345:6789:aBcD:ef01:127.0.0.1'
        >>> extract('http://[aBcD:ef01:2345:6789:aBcD:ef01:127.0.0.1.1]/path/to/file').ipv6
        ''
        >>> extract('http://[aBcD:ef01:2345:6789:aBcD:ef01:256.0.0.1]').ipv6
        ''
        """
        min_num_ipv6_chars = 4
        if (
            len(self.domain) >= min_num_ipv6_chars
            and self.domain[0] == "["
            and self.domain[-1] == "]"
            and not (self.suffix or self.subdomain)
        ):
            debracketed = self.domain[1:-1]
            if looks_like_ipv6(debracketed):
                return debracketed
        return ""


class TLDExtract:
    """A callable for extracting subdomain, domain, and suffix components from a URL."""

    def __init__(
        self,
        cache_dir: str | None = get_cache_dir(),
        suffix_list_urls: Sequence[str] = PUBLIC_SUFFIX_LIST_URLS,
        fallback_to_snapshot: bool = True,
        include_psl_private_domains: bool = False,
        extra_suffixes: Sequence[str] = (),
        cache_fetch_timeout: str | float | None = CACHE_TIMEOUT,
    ) -> None:
        """Construct a callable for extracting subdomain, domain, and suffix components from a URL.

        Upon calling it, it first checks for a JSON in `cache_dir`. By default,
        the `cache_dir` will live in the tldextract directory. You can disable
        the caching functionality of this module by setting `cache_dir` to `None`.

        If the cached version does not exist (such as on the first run), HTTP request the URLs in
        `suffix_list_urls` in order, until one returns public suffix list data. To disable HTTP
        requests, set this to an empty sequence.

        The default list of URLs points to the latest version of the Mozilla Public Suffix List and
        its mirror, but any similar document could be specified. Local files can be specified by
        using the `file://` protocol. (See `urllib2` documentation.)

        If there is no cached version loaded and no data is found from the `suffix_list_urls`,
        the module will fall back to the included TLD set snapshot. If you do not want
        this behavior, you may set `fallback_to_snapshot` to False, and an exception will be
        raised instead.

        The Public Suffix List includes a list of "private domains" as TLDs,
        such as blogspot.com. These do not fit `tldextract`'s definition of a
        suffix, so these domains are excluded by default. If you'd like them
        included instead, set `include_psl_private_domains` to True.

        You can pass additional suffixes via the `extra_suffixes` argument without changing the list URLs.

        cache_fetch_timeout is passed unmodified to the underlying request object
        per the requests documentation here:
        http://docs.python-requests.org/en/master/user/advanced/#timeouts

        cache_fetch_timeout can also be set to a single value with the
        environment variable TLDEXTRACT_CACHE_TIMEOUT, like so:

        TLDEXTRACT_CACHE_TIMEOUT="1.2"

        When set this way, the same timeout value will be used for both connect
        and read timeouts
        """
        suffix_list_urls = suffix_list_urls or ()
        self.suffix_list_urls = tuple(
            url.strip() for url in suffix_list_urls if url.strip()
        )

        self.fallback_to_snapshot = fallback_to_snapshot
        if not (self.suffix_list_urls or cache_dir or self.fallback_to_snapshot):
            raise ValueError(
                "The arguments you have provided disable all ways for tldextract "
                "to obtain data. Please provide a suffix list data, a cache_dir, "
                "or set `fallback_to_snapshot` to `True`."
            )

        self.include_psl_private_domains = include_psl_private_domains
        self.extra_suffixes = extra_suffixes
        self._extractor: _PublicSuffixListTLDExtractor | None = None

        self.cache_fetch_timeout = (
            float(cache_fetch_timeout)
            if isinstance(cache_fetch_timeout, str)
            else cache_fetch_timeout
        )
        self._cache = DiskCache(cache_dir)

    def __call__(
        self,
        url: str,
        include_psl_private_domains: bool | None = None,
        session: requests.Session | None = None,
    ) -> ExtractResult:
        """Alias for `extract_str`."""
        return self.extract_str(url, include_psl_private_domains, session=session)

    def extract_str(
        self,
        url: str,
        include_psl_private_domains: bool | None = None,
        session: requests.Session | None = None,
    ) -> ExtractResult:
        """Take a string URL and split it into its subdomain, domain, and suffix components.

        I.e. its effective TLD, gTLD, ccTLD, etc. components.

        >>> extractor = TLDExtract()
        >>> extractor.extract_str('http://forums.news.cnn.com/')
        ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)
        >>> extractor.extract_str('http://forums.bbc.co.uk/')
        ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk', is_private=False)

        Allows configuring the HTTP request via the optional `session`
        parameter, for example if you need to use an HTTP proxy. See also
        `requests.Session`.

        >>> import requests
        >>> session = requests.Session()
        >>> # customize your session here
        >>> with session:
        ...     extractor.extract_str("http://forums.news.cnn.com/", session=session)
        ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)
        """
        return self._extract_netloc(
            lenient_netloc(url), include_psl_private_domains, session=session
        )

    def extract_urllib(
        self,
        url: urllib.parse.ParseResult | urllib.parse.SplitResult,
        include_psl_private_domains: bool | None = None,
        session: requests.Session | None = None,
    ) -> ExtractResult:
        """Take the output of urllib.parse URL parsing methods and further split the parsed URL.

        Splits the parsed URL into its subdomain, domain, and suffix
        components, i.e. its effective TLD, gTLD, ccTLD, etc. components.

        This method is like `extract_str` but faster, as the string's domain
        name has already been parsed.

        >>> extractor = TLDExtract()
        >>> extractor.extract_urllib(urllib.parse.urlsplit('http://forums.news.cnn.com/'))
        ExtractResult(subdomain='forums.news', domain='cnn', suffix='com', is_private=False)
        >>> extractor.extract_urllib(urllib.parse.urlsplit('http://forums.bbc.co.uk/'))
        ExtractResult(subdomain='forums', domain='bbc', suffix='co.uk', is_private=False)
        """
        return self._extract_netloc(
            url.netloc, include_psl_private_domains, session=session
        )

    def _extract_netloc(
        self,
        netloc: str,
        include_psl_private_domains: bool | None,
        session: requests.Session | None = None,
    ) -> ExtractResult:
        netloc_with_ascii_dots = (
            netloc.replace("\u3002", "\u002e")
            .replace("\uff0e", "\u002e")
            .replace("\uff61", "\u002e")
        )

        min_num_ipv6_chars = 4
        if (
            len(netloc_with_ascii_dots) >= min_num_ipv6_chars
            and netloc_with_ascii_dots[0] == "["
            and netloc_with_ascii_dots[-1] == "]"
            and looks_like_ipv6(netloc_with_ascii_dots[1:-1])
        ):
            return ExtractResult("", netloc_with_ascii_dots, "", is_private=False)

        labels = netloc_with_ascii_dots.split(".")

        suffix_index, is_private = self._get_tld_extractor(
            session=session
        ).suffix_index(labels, include_psl_private_domains=include_psl_private_domains)

        num_ipv4_labels = 4
        if suffix_index == len(labels) == num_ipv4_labels and looks_like_ip(
            netloc_with_ascii_dots
        ):
            return ExtractResult("", netloc_with_ascii_dots, "", is_private)

        suffix = ".".join(labels[suffix_index:]) if suffix_index != len(labels) else ""
        subdomain = ".".join(labels[: suffix_index - 1]) if suffix_index >= 2 else ""
        domain = labels[suffix_index - 1] if suffix_index else ""
        return ExtractResult(subdomain, domain, suffix, is_private)

    def update(
        self, fetch_now: bool = False, session: requests.Session | None = None
    ) -> None:
        """Force fetch the latest suffix list definitions."""
        self._extractor = None
        self._cache.clear()
        if fetch_now:
            self._get_tld_extractor(session=session)

    @property
    def tlds(self, session: requests.Session | None = None) -> list[str]:
        """Returns the list of TLDs used by default.

        This will vary based on `include_psl_private_domains` and `extra_suffixes`
        """
        return list(self._get_tld_extractor(session=session).tlds())

    def _get_tld_extractor(
        self, session: requests.Session | None = None
    ) -> _PublicSuffixListTLDExtractor:
        """Get or compute this object's TLDExtractor.

        Looks up the TLDExtractor in roughly the following order, based on the
        settings passed to __init__:

        1. Memoized on `self`
        2. Local system _cache file
        3. Remote PSL, over HTTP
        4. Bundled PSL snapshot file
        """
        if self._extractor:
            return self._extractor

        public_tlds, private_tlds = get_suffix_lists(
            cache=self._cache,
            urls=self.suffix_list_urls,
            cache_fetch_timeout=self.cache_fetch_timeout,
            fallback_to_snapshot=self.fallback_to_snapshot,
            session=session,
        )

        if not any([public_tlds, private_tlds, self.extra_suffixes]):
            raise ValueError("No tlds set. Cannot proceed without tlds.")

        self._extractor = _PublicSuffixListTLDExtractor(
            public_tlds=public_tlds,
            private_tlds=private_tlds,
            extra_tlds=list(self.extra_suffixes),
            include_psl_private_domains=self.include_psl_private_domains,
        )
        return self._extractor


TLD_EXTRACTOR = TLDExtract()


class Trie:
    """Trie for storing eTLDs with their labels in reverse-order."""

    def __init__(
        self,
        matches: dict[str, Trie] | None = None,
        end: bool = False,
        is_private: bool = False,
    ) -> None:
        """TODO."""
        self.matches = matches if matches else {}
        self.end = end
        self.is_private = is_private

    @staticmethod
    def create(
        public_suffixes: Collection[str],
        private_suffixes: Collection[str] | None = None,
    ) -> Trie:
        """Create a Trie from a list of suffixes and return its root node."""
        root_node = Trie()

        for suffix in public_suffixes:
            root_node.add_suffix(suffix)

        if private_suffixes is None:
            private_suffixes = []

        for suffix in private_suffixes:
            root_node.add_suffix(suffix, True)

        return root_node

    def add_suffix(self, suffix: str, is_private: bool = False) -> None:
        """Append a suffix's labels to this Trie node."""
        node = self

        labels = suffix.split(".")
        labels.reverse()

        for label in labels:
            if label not in node.matches:
                node.matches[label] = Trie()
            node = node.matches[label]

        node.end = True
        node.is_private = is_private
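

# Illustrative sketch (hypothetical helper, not part of the original module):
# `Trie.create` stores each suffix's labels in reverse order, so "co.uk" is
# reachable as root -> "uk" -> "co", letting lookups walk a hostname's labels
# right to left.
def _demo_trie() -> None:
    root = Trie.create(["com", "co.uk"])
    assert sorted(root.matches) == ["com", "uk"]
    assert root.matches["uk"].matches["co"].end  # "co.uk" terminates a suffix
    assert not root.matches["uk"].end  # "uk" alone was never added as a suffix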


@wraps(TLD_EXTRACTOR.__call__)
def extract(
    url: str,
    include_psl_private_domains: bool | None = False,
    session: requests.Session | None = None,
) -> ExtractResult:
    return TLD_EXTRACTOR(
        url, include_psl_private_domains=include_psl_private_domains, session=session
    )


@wraps(TLD_EXTRACTOR.update)
def update(*args, **kwargs):
    return TLD_EXTRACTOR.update(*args, **kwargs)


class _PublicSuffixListTLDExtractor:
    """Wrapper around this project's main algo for PSL lookups."""

    def __init__(
        self,
        public_tlds: list[str],
        private_tlds: list[str],
        extra_tlds: list[str],
        include_psl_private_domains: bool = False,
    ):
        self.include_psl_private_domains = include_psl_private_domains
        self.public_tlds = public_tlds
        self.private_tlds = private_tlds
        self.tlds_incl_private = frozenset(public_tlds + private_tlds + extra_tlds)
        self.tlds_excl_private = frozenset(public_tlds + extra_tlds)
        self.tlds_incl_private_trie = Trie.create(
            self.tlds_excl_private, frozenset(private_tlds)
        )
        self.tlds_excl_private_trie = Trie.create(self.tlds_excl_private)

    def tlds(self, include_psl_private_domains: bool | None = None) -> frozenset[str]:
        """Get the currently filtered list of suffixes."""
        if include_psl_private_domains is None:
            include_psl_private_domains = self.include_psl_private_domains

        return (
            self.tlds_incl_private
            if include_psl_private_domains
            else self.tlds_excl_private
        )

    def suffix_index(
        self, spl: list[str], include_psl_private_domains: bool | None = None
    ) -> tuple[int, bool]:
        """Return the index of the first suffix label, and whether it is private.

        Returns len(spl) if no suffix is found.
        """
        if include_psl_private_domains is None:
            include_psl_private_domains = self.include_psl_private_domains

        node = (
            self.tlds_incl_private_trie
            if include_psl_private_domains
            else self.tlds_excl_private_trie
        )
        i = len(spl)
        j = i
        for label in reversed(spl):
            decoded_label = _decode_punycode(label)
            if decoded_label in node.matches:
                j -= 1
                node = node.matches[decoded_label]
                if node.end:
                    i = j
                continue

            is_wildcard = "*" in node.matches
            if is_wildcard:
                is_wildcard_exception = "!" + decoded_label in node.matches
                if is_wildcard_exception:
                    return j, node.matches["*"].is_private
                return j - 1, node.matches["*"].is_private

            break

        return i, node.is_private


def _decode_punycode(label: str) -> str:
    lowered = label.lower()
    looks_like_puny = lowered.startswith("xn--")
    if looks_like_puny:
        try:
            return idna.decode(lowered)
        except (UnicodeError, IndexError):
            pass
    return lowered
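

# Illustrative sketch (hypothetical helper, not part of the original module):
# with PSL-style rules "*.ck" and "!www.ck", every label under .ck belongs to
# the suffix except the wildcard exception www.ck, exactly as suffix_index's
# wildcard branch above computes.
def _demo_wildcard_suffixes() -> None:
    psl = _PublicSuffixListTLDExtractor(
        public_tlds=["com", "*.ck", "!www.ck"], private_tlds=[], extra_tlds=[]
    )
    assert psl.suffix_index(["foo", "bar", "ck"]) == (1, False)  # suffix "bar.ck"
    assert psl.suffix_index(["www", "ck"]) == (1, False)  # exception: suffix "ck"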
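

# Illustrative module-level usage (hypothetical values; assumes a cached or
# bundled Public Suffix List is available, so no network access is needed):
def _demo_extract() -> None:
    assert extract("https://blog.example.co.uk/page") == ExtractResult(
        subdomain="blog", domain="example", suffix="co.uk", is_private=False
    )
    # An extractor limited to offline data plus one custom suffix:
    offline = TLDExtract(suffix_list_urls=(), extra_suffixes=["internal.test"])
    assert offline("host.team.internal.test").suffix == "internal.test"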