File: /usr/local/lib/python3.10/dist-packages/pip/_vendor/pygments/__pycache__/lexer.cpython-310.pyc
"""
    pygments.lexer
    ~~~~~~~~~~~~~~

    Base lexer classes.

    :copyright: Copyright 2006-2025 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import sys
import time

from pip._vendor.pygments.filter import apply_filters, Filter
from pip._vendor.pygments.filters import get_filter_by_name
from pip._vendor.pygments.token import Error, Text, Other, Whitespace, _TokenType
from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt, \
    make_analysator, Future, guess_decode
from pip._vendor.pygments.regexopt import regex_opt

__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer',
           'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this',
           'default', 'words', 'line_re']

line_re = re.compile('.*?\n')

_encoding_map = [(b'\xef\xbb\xbf', 'utf-8'),
                 (b'\xff\xfe\0\0', 'utf-32'),
                 (b'\0\0\xfe\xff', 'utf-32be'),
                 (b'\xff\xfe', 'utf-16'),
                 (b'\xfe\xff', 'utf-16be')]

_default_analyse = staticmethod(lambda x: 0.0)


class LexerMeta(type):
    """
    This metaclass automagically converts ``analyse_text`` methods into
    static methods which always return float values.
    """

    def __new__(mcs, name, bases, d):
        if 'analyse_text' in d:
            d['analyse_text'] = make_analysator(d['analyse_text'])
        return type.__new__(mcs, name, bases, d)


class Lexer(metaclass=LexerMeta):
    """
    Lexer for a specific language.

    See also :doc:`lexerdevelopment`, a high-level guide to writing
    lexers.

    Lexer classes have attributes used for choosing the most appropriate
    lexer based on various criteria.

    .. autoattribute:: name
       :no-value:
    .. autoattribute:: aliases
       :no-value:
    .. autoattribute:: filenames
       :no-value:
    .. autoattribute:: alias_filenames
    .. autoattribute:: mimetypes
       :no-value:
    .. autoattribute:: priority

    Lexers included in Pygments should have two additional attributes:

    .. autoattribute:: url
       :no-value:
    .. autoattribute:: version_added
       :no-value:

    Lexers included in Pygments may have additional attributes:

    .. autoattribute:: _example
       :no-value:

    You can pass options to the constructor. The basic options recognized
    by all lexers and processed by the base `Lexer` class are:

    ``stripnl``
        Strip leading and trailing newlines from the input (default: True).
    ``stripall``
        Strip all leading and trailing whitespace from the input
        (default: False).
    ``ensurenl``
        Make sure that the input ends with a newline (default: True).  This
        is required for some lexers that consume input linewise.

        .. versionadded:: 1.3

    ``tabsize``
        If given and greater than 0, expand tabs in the input (default: 0).
    ``encoding``
        If given, must be an encoding name. This encoding will be used to
        convert the input string to Unicode, if it is not already a Unicode
        string (default: ``'guess'``, which uses a simple UTF-8 / Locale /
        Latin1 detection).  Can also be ``'chardet'`` to use the chardet
        library, if it is installed.
    ``inencoding``
        Overrides the ``encoding`` if given.
    """

    #: Full name of the lexer, in human-readable form
    name = None

    #: A list of short, unique identifiers that can be used to look
    #: up the lexer from a list, e.g., using `get_lexer_by_name()`
    aliases = []

    #: A list of `fnmatch` patterns that match filenames which contain
    #: content for this lexer
    filenames = []

    #: A list of `fnmatch` patterns that match filenames which may or may
    #: not contain content for this lexer
    alias_filenames = []

    #: A list of MIME types for content that can be lexed with this lexer
    mimetypes = []

    #: Priority, should multiple lexers match and no content is provided
    priority = 0

    #: URL of the language specification/definition
    url = None

    #: Version of Pygments in which the lexer was added
    version_added = ''

    #: Example file name
    _example = None

    def __init__(self, **options):
        """
        This constructor takes arbitrary options as keyword arguments.
        Every subclass must first process its own options and then call
        the `Lexer` constructor, since it processes the basic
        options like `stripnl`.

        An example looks like this:

        .. sourcecode:: python

           def __init__(self, **options):
               self.compress = options.get('compress', '')
               Lexer.__init__(self, **options)

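        A caller-side sketch (``PythonLexer`` is one concrete subclass
        shipped with the vendored Pygments):

        .. sourcecode:: python

           from pip._vendor.pygments.lexers.python import PythonLexer

           lexer = PythonLexer(stripall=True, tabsize=4)
           for tokentype, value in lexer.get_tokens("print('hi')\n"):
               print(tokentype, repr(value))
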
        As these options must all be specifiable as strings (due to the
        command line usage), there are various utility functions
        available to help with that, see `Utilities`_.
        """
        self.options = options
        self.stripnl = get_bool_opt(options, 'stripnl', True)
        self.stripall = get_bool_opt(options, 'stripall', False)
        self.ensurenl = get_bool_opt(options, 'ensurenl', True)
        self.tabsize = get_int_opt(options, 'tabsize', 0)
        self.encoding = options.get('encoding', 'guess')
        self.encoding = options.get('inencoding') or self.encoding
        self.filters = []
        for filter_ in get_list_opt(options, 'filters', ()):
            self.add_filter(filter_)

    def __repr__(self):
        if self.options:
            return f'<pygments.lexers.{self.__class__.__name__} with {self.options!r}>'
        else:
            return f'<pygments.lexers.{self.__class__.__name__}>'

    def add_filter(self, filter_, **options):
        """
        Add a new stream filter to this lexer.
        """
        if not isinstance(filter_, Filter):
            filter_ = get_filter_by_name(filter_, **options)
        self.filters.append(filter_)

    def analyse_text(text):
        """
        A static method which is called for lexer guessing.

        It should analyse the text and return a float in the range
        from ``0.0`` to ``1.0``.  If it returns ``0.0``, the lexer
        will not be selected as the most probable one, if it returns
        ``1.0``, it will be selected immediately.  This is used by
        `guess_lexer`.

        The `LexerMeta` metaclass automatically wraps this function so
        that it works like a static method (no ``self`` or ``cls``
        parameter) and the return value is automatically converted to
        `float`. If the return value is an object that is boolean `False`,
        it's the same as if the return value was ``0.0``.
        """
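
    # Overriding sketch (hypothetical subclass, not from this module):
    # return a score in [0.0, 1.0]; ``guess_lexer()`` picks the highest.
    #
    #     class MyLangLexer(RegexLexer):
    #         def analyse_text(text):
    #             return 1.0 if text.startswith('#!mylang') else 0.0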

    def _preprocess_lexer_input(self, text):
        """Apply preprocessing such as decoding the input, removing BOM and normalizing newlines."""
        if not isinstance(text, str):
            if self.encoding == 'guess':
                text, _ = guess_decode(text)
            elif self.encoding == 'chardet':
                try:
                    # pip strips chardet from its vendored Pygments
                    raise ImportError('chardet is not vendored by pip')
                except ImportError as e:
                    raise ImportError('To enable chardet encoding guessing, '
                                      'please install the chardet library '
                                      'from http://chardet.feedparser.org/') from e
                # unreachable in pip's vendored copy, kept from upstream:
                # check for BOM first
                decoded = None
                for bom, encoding in _encoding_map:
                    if text.startswith(bom):
                        decoded = text[len(bom):].decode(encoding, 'replace')
                        break
                # no BOM found, so use chardet
                if decoded is None:
                    enc = chardet.detect(text[:1024])  # guess using a sample
                    decoded = text.decode(enc.get('encoding') or 'utf-8',
                                          'replace')
                text = decoded
            else:
                text = text.decode(self.encoding)
                if text.startswith('\ufeff'):
                    text = text[len('\ufeff'):]
        else:
            if text.startswith('\ufeff'):
                text = text[len('\ufeff'):]

        # text now *is* a unicode string; normalize newlines
        text = text.replace('\r\n', '\n')
        text = text.replace('\r', '\n')

        if self.stripall:
            text = text.strip()
        elif self.stripnl:
            text = text.strip('\n')

        if self.tabsize > 0:
            text = text.expandtabs(self.tabsize)

        if self.ensurenl and not text.endswith('\n'):
            text += '\n'

        return text

    def get_tokens(self, text, unfiltered=False):
        """
        This method is the basic interface of a lexer. It is called by
        the `highlight()` function. It must process the text and return an
        iterable of ``(tokentype, value)`` pairs from `text`.

        Normally, you don't need to override this method. The default
        implementation processes the options recognized by all lexers
        (`stripnl`, `stripall` and so on), and then yields all tokens
        from `get_tokens_unprocessed()`, with the ``index`` dropped.

        If `unfiltered` is set to `True`, the filtering mechanism is
        bypassed even if filters are defined.
        """
        def streamer():
            for _, t, v in self.get_tokens_unprocessed(text):
                yield t, v
        stream = streamer()
        if not unfiltered:
            stream = apply_filters(stream, self.filters, self)
        return stream

    def get_tokens_unprocessed(self, text):
        """
        This method should process the text and return an iterable of
        ``(index, tokentype, value)`` tuples where ``index`` is the starting
        position of the token within the input text.

        It must be overridden by subclasses. It is recommended to
        implement it as a generator to maximize efficiency.
        """
        raise NotImplementedError
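
    # Contract sketch (illustrative; exact token types vary by lexer and
    # Pygments version):
    #
    #     from pip._vendor.pygments.lexers.python import PythonLexer
    #     for index, tokentype, value in \
    #             PythonLexer().get_tokens_unprocessed('x = 1\n'):
    #         print(index, tokentype, repr(value))
    #     # 0 Token.Name 'x' ... 2 Token.Operator '=' ... 4 Token...Integer '1'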
version_added�_examplerArEr>r)r\rer^r!r!r!r$r1s$;
1r)�	metaclassc@s$eZdZdZefdd�Zdd�ZdS)ra 
    This lexer takes two lexers as arguments. A root lexer and
    a language lexer. First everything is scanned using the language
    lexer, afterwards all ``Other`` tokens are lexed using the root
    lexer.

    The lexers from the ``template`` lexer package use this base lexer.
    """

    def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options):
        self.root_lexer = _root_lexer(**options)
        self.language_lexer = _language_lexer(**options)
        self.needle = _needle
        Lexer.__init__(self, **options)

    def get_tokens_unprocessed(self, text):
        buffered = ''
        insertions = []
        lng_buffer = []
        for i, t, v in self.language_lexer.get_tokens_unprocessed(text):
            if t is self.needle:
                if lng_buffer:
                    insertions.append((len(buffered), lng_buffer))
                    lng_buffer = []
                buffered += v
            else:
                lng_buffer.append((i, t, v))
        if lng_buffer:
            insertions.append((len(buffered), lng_buffer))
        return do_insertions(insertions,
                             self.root_lexer.get_tokens_unprocessed(buffered))
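
    # Declaration sketch, mirroring the ``template`` lexers (HtmlLexer and
    # PhpLexer stand in for any root/language pair whose language lexer
    # yields ``Other`` for the parts the root lexer should handle):
    #
    #     class HtmlPhpLexer(DelegatingLexer):
    #         def __init__(self, **options):
    #             super().__init__(HtmlLexer, PhpLexer, **options)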


class include(str):
    """
    Indicates that a state should include rules from another state.
    """


class _inherit:
    """
    Indicates that a state should inherit from its superclass.
    """

    def __repr__(self):
        return 'inherit'


inherit = _inherit()


class combined(tuple):
    """
    Indicates a state combined from multiple states.
    """

    def __new__(cls, *args):
        return tuple.__new__(cls, args)

    def __init__(self, *args):
        # tuple.__init__ doesn't do anything; the args are consumed by __new__
        pass


class _PseudoMatch:
    """
    A pseudo match object constructed from a string.
    """

    def __init__(self, start, text):
        self._text = text
        self._start = start

    def start(self, arg=None):
        return self._start

    def end(self, arg=None):
        return self._start + len(self._text)

    def group(self, arg=None):
        if arg:
            raise IndexError('No such group')
        return self._text

    def groups(self):
        return (self._text,)

    def groupdict(self):
        return {}


def bygroups(*args):
    """
    Callback that yields multiple actions for each group in the match.
    """
    def callback(lexer, match, ctx=None):
        for i, action in enumerate(args):
            if action is None:
                continue
            elif type(action) is _TokenType:
                data = match.group(i + 1)
                if data:
                    yield match.start(i + 1), action, data
            else:
                data = match.group(i + 1)
                if data is not None:
                    if ctx:
                        ctx.pos = match.start(i + 1)
                    for item in action(lexer,
                                       _PseudoMatch(match.start(i + 1), data), ctx):
                        if item:
                            yield item
        if ctx:
            ctx.pos = match.end()
    return callback
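
# Rule sketch for a RegexLexer ``tokens`` table: one regex, three groups,
# three token types (Keyword, Whitespace and Name would come from
# pip._vendor.pygments.token):
#
#     (r'(def)(\s+)([a-zA-Z_]\w*)',
#      bygroups(Keyword, Whitespace, Name.Function))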


class _This:
    """
    Special singleton used for indicating the caller class.
    Used by ``using``.
    """


this = _This()


def using(_other, **kwargs):
    """
    Callback that processes the match with a different lexer.

    The keyword arguments are forwarded to the lexer, except `state` which
    is handled separately.

    `state` specifies the state that the new lexer will start in, and can
    be an enumerable such as ('root', 'inline', 'string') or a simple
    string which is assumed to be on top of the root state.

    Note: For that to work, `_other` must not be an `ExtendedRegexLexer`.
    �state�stack�rootNc3sx��r��|j�|jdi���}n|}|��}|j|��fi���D]
\}}}||||fVq#|r:|��|_dSdSrp)�updater<rCr�r^r�r�r��r�r�r��lx�sr|r_r`)�	gt_kwargs�kwargsr!r$r��s� �zusing.<locals>.callbackc3sl���|j��di���}|��}|j|��fi���D]
\}}}||||fVq|r4|��|_dSdSrp)r�r<r�r^r�r�r�r���_otherr�r�r!r$r��s� �r])�poprF�listr�r)r�r�r�r�r!r�r$r�s


�
rc@r')rz�
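
# Rule sketch (``OtherLexer`` is illustrative): hand the matched span to a
# different lexer, starting it in its 'root' state; ``using(this)`` re-enters
# the current lexer instead:
#
#     (r'<%.*?%>', using(OtherLexer, state='root'))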


class default:
    """
    Indicates a state or state action (e.g. #pop) to apply.
    For example, ``default('#pop')`` is equivalent to ``('', Token, '#pop')``.
    Note that state tuples may be used as well.

    .. versionadded:: 2.0
    """

    def __init__(self, state):
        self.state = state
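
    # Usage sketch inside a ``tokens`` table: if nothing in the state
    # matched, pop back to the previous state without consuming input:
    #
    #     'attr': [
    #         (r'\w+', Name.Attribute),
    #         default('#pop'),
    #     ]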


class words(Future):
    """
    Indicates a list of literal words that is transformed into an optimized
    regex that matches any of the words.

    .. versionadded:: 2.0
    """

    def __init__(self, words, prefix='', suffix=''):
        self.words = words
        self.prefix = prefix
        self.suffix = suffix

    def get(self):
        return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix)


class RegexLexerMeta(LexerMeta):
    """
    Metaclass for RegexLexer, creates the self._tokens attribute from
    self.tokens on the first instantiation.
    """

    def _process_regex(cls, regex, rflags, state):
        """Preprocess the regular expression component of a token definition."""
        if isinstance(regex, Future):
            regex = regex.get()
        return re.compile(regex, rflags).match

    def _process_token(cls, token):
        """Preprocess the token component of a token definition."""
        assert type(token) is _TokenType or callable(token), \
            f'token type must be simple type or callable, not {token!r}'
        return token

    def _process_new_state(cls, new_state, unprocessed, processed):
        """Preprocess the state transition action of a token definition."""
        if isinstance(new_state, str):
            # an existing state
            if new_state == '#pop':
                return -1
            elif new_state in unprocessed:
                return (new_state,)
            elif new_state == '#push':
                return new_state
            elif new_state[:5] == '#pop:':
                return -int(new_state[5:])
            else:
                assert False, f'unknown new state {new_state!r}'
        elif isinstance(new_state, combined):
            # combine a new state from existing ones
            tmp_state = '_tmp_%d' % cls._tmpname
            cls._tmpname += 1
            itokens = []
            for istate in new_state:
                assert istate != new_state, f'circular state ref {istate!r}'
                itokens.extend(cls._process_state(unprocessed,
                                                  processed, istate))
            processed[tmp_state] = itokens
            return (tmp_state,)
        elif isinstance(new_state, tuple):
            # push more than one state
            for istate in new_state:
                assert (istate in unprocessed or
                        istate in ('#pop', '#push')), \
                    'unknown new state ' + istate
            return new_state
        else:
            assert False, f'unknown new state def {new_state!r}'

    def _process_state(cls, unprocessed, processed, state):
        """Preprocess a single state definition."""
        assert isinstance(state, str), f'wrong state name {state!r}'
        assert state[0] != '#', f'invalid state name {state!r}'
        if state in processed:
            return processed[state]
        tokens = processed[state] = []
        rflags = cls.flags
        for tdef in unprocessed[state]:
            if isinstance(tdef, include):
                # it's a state reference
                assert tdef != state, f'circular state reference {state!r}'
                tokens.extend(cls._process_state(unprocessed, processed,
                                                 str(tdef)))
                continue
            if isinstance(tdef, _inherit):
                # should be processed already, but may not in the case of:
                # 1. the state has no counterpart in any parent
                # 2. the state includes more than one 'inherit'
                continue
            if isinstance(tdef, default):
                new_state = cls._process_new_state(tdef.state,
                                                   unprocessed, processed)
                tokens.append((re.compile('').match, None, new_state))
                continue

            assert type(tdef) is tuple, f'wrong rule def {tdef!r}'

            try:
                rex = cls._process_regex(tdef[0], rflags, state)
            except Exception as err:
                raise ValueError(f'uncompilable regex {tdef[0]!r} in state'
                                 f' {state!r} of {cls!r}: {err}') from err

            token = cls._process_token(tdef[1])

            if len(tdef) == 2:
                new_state = None
            else:
                new_state = cls._process_new_state(tdef[2],
                                                   unprocessed, processed)

            tokens.append((rex, token, new_state))
        return tokens

    def process_tokendef(cls, name, tokendefs=None):
        """Preprocess a dictionary of token definitions."""
        processed = cls._all_tokens[name] = {}
        tokendefs = tokendefs or cls.tokens[name]
        for state in list(tokendefs):
            cls._process_state(tokendefs, processed, state)
        return processed

    def get_tokendefs(cls):
        """
        Merge tokens from superclasses in MRO order, returning a single tokendef
        dictionary.

        Any state that is not defined by a subclass will be inherited
        automatically.  States that *are* defined by subclasses will, by
        default, override that state in the superclass.  If a subclass wishes to
        inherit definitions from a superclass, it can use the special value
        "inherit", which will cause the superclass' state definition to be
        included at that point in the state.
        """
        tokens = {}
        inheritable = {}
        for c in cls.__mro__:
            toks = c.__dict__.get('tokens', {})

            for state, items in toks.items():
                curitems = tokens.get(state)
                if curitems is None:
                    # N.b. because this is assigned by reference, sufficiently
                    # deep hierarchies are processed incrementally
                    tokens[state] = items
                    try:
                        inherit_ndx = items.index(inherit)
                    except ValueError:
                        continue
                    inheritable[state] = inherit_ndx
                    continue

                inherit_ndx = inheritable.pop(state, None)
                if inherit_ndx is None:
                    continue

                # replace the "inherit" value with the superclass items
                curitems[inherit_ndx:inherit_ndx+1] = items
                try:
                    # keep the inherit position relative to the stream
                    new_inh_ndx = items.index(inherit)
                except ValueError:
                    pass
                else:
                    inheritable[state] = inherit_ndx + new_inh_ndx

        return tokens

    def __call__(cls, *args, **kwds):
        """Instantiate cls after preprocessing its token definitions."""
        if '_tokens' not in cls.__dict__:
            cls._all_tokens = {}
            cls._tmpname = 0
            if hasattr(cls, 'token_variants') and cls.token_variants:
                # don't process yet
                pass
            else:
                cls._tokens = cls.process_tokendef('', cls.get_tokendefs())

        return type.__call__(cls, *args, **kwds)


class RegexLexer(Lexer, metaclass=RegexLexerMeta):
    """
    Base for simple stateful regular expression-based lexers.
    Simplifies the lexing process so that you need only
    provide a list of states and regular expressions.
    """

    #: Flags for compiling the regular expressions; defaults to re.MULTILINE.
    flags = re.MULTILINE

    #: A dictionary mapping state names to lists of
    #: ``(regex, action, new_state)`` rules; there must be a 'root' state.
    tokens = {}

    def get_tokens_unprocessed(self, text, stack=('root',)):
        """
        Split ``text`` into (tokentype, text) pairs.

        ``stack`` is the initial stack (default: ``['root']``)
        """
        pos = 0
        tokendefs = self._tokens
        statestack = list(stack)
        statetokens = tokendefs[statestack[-1]]
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, pos)
                if m:
                    if action is not None:
                        if type(action) is _TokenType:
                            yield pos, action, m.group()
                        else:
                            yield from action(self, m)
                    pos = m.end()
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            for state in new_state:
                                if state == '#pop':
                                    if len(statestack) > 1:
                                        statestack.pop()
                                elif state == '#push':
                                    statestack.append(statestack[-1])
                                else:
                                    statestack.append(state)
                        elif isinstance(new_state, int):
                            # pop, but keep at least one state on the stack
                            # (unexpected pops must not raise)
                            if abs(new_state) >= len(statestack):
                                del statestack[1:]
                            else:
                                del statestack[new_state:]
                        elif new_state == '#push':
                            statestack.append(statestack[-1])
                        else:
                            assert False, f'wrong state def: {new_state!r}'
                        statetokens = tokendefs[statestack[-1]]
                    break
            else:
                # we are here only if no state token matched
                try:
                    if text[pos] == '\n':
                        # at EOL, reset state to 'root'
                        statestack = ['root']
                        statetokens = tokendefs['root']
                        yield pos, Whitespace, '\n'
                        pos += 1
                        continue
                    yield pos, Error, text[pos]
                    pos += 1
                except IndexError:
                    break
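
    # A minimal subclass sketch tying the helpers above together (token
    # types would be imported from pip._vendor.pygments.token; illustrative
    # only, not a lexer shipped with Pygments):
    #
    #     class IniLikeLexer(RegexLexer):
    #         name = 'IniLike'
    #         aliases = ['inilike']
    #         tokens = {
    #             'root': [
    #                 (r'\s+', Whitespace),
    #                 (r'[;#].*', Comment.Single),
    #                 (r'\[[^\]]*\]', Keyword),
    #                 (r'(\w+)(\s*)(=)(.*)',
    #                  bygroups(Name.Attribute, Whitespace, Operator, String)),
    #                 (r'.+', Text),
    #             ],
    #         }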


class LexerContext:
    """
    A helper object that holds lexer position data.
    """

    def __init__(self, text, pos, stack=None, end=None):
        self.text = text
        self.pos = pos
        self.end = end or len(text)
        self.stack = stack or ['root']

    def __repr__(self):
        return f'LexerContext({self.text!r}, {self.pos!r}, {self.stack!r})'
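
    # Construction sketch: lexing can be resumed mid-string by handing a
    # context to ExtendedRegexLexer.get_tokens_unprocessed() (the state
    # names here are hypothetical):
    #
    #     ctx = LexerContext('print(1)\n', 0)
    #     ctx.stack = ['root', 'string']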


class ExtendedRegexLexer(RegexLexer):
    """
    A RegexLexer that uses a context object to store its state.
    """

    def get_tokens_unprocessed(self, text=None, context=None):
        """
        Split ``text`` into (tokentype, text) pairs.
        If ``context`` is given, use this lexer context instead.
        """
        tokendefs = self._tokens
        if not context:
            ctx = LexerContext(text, 0)
            statetokens = tokendefs['root']
        else:
            ctx = context
            statetokens = tokendefs[ctx.stack[-1]]
            text = ctx.text
        while 1:
            for rexmatch, action, new_state in statetokens:
                m = rexmatch(text, ctx.pos, ctx.end)
                if m:
                    if action is not None:
                        if type(action) is _TokenType:
                            yield ctx.pos, action, m.group()
                            ctx.pos = m.end()
                        else:
                            yield from action(self, m, ctx)
                            if not new_state:
                                # altered the state stack?
                                statetokens = tokendefs[ctx.stack[-1]]
                                # CAUTION: callback must set ctx.pos!
                    if new_state is not None:
                        # state transition
                        if isinstance(new_state, tuple):
                            for state in new_state:
                                if state == '#pop':
                                    if len(ctx.stack) > 1:
                                        ctx.stack.pop()
                                elif state == '#push':
                                    ctx.stack.append(ctx.stack[-1])
                                else:
                                    ctx.stack.append(state)
                        elif isinstance(new_state, int):
                            # see RegexLexer above
                            if abs(new_state) >= len(ctx.stack):
                                del ctx.stack[1:]
                            else:
                                del ctx.stack[new_state:]
                        elif new_state == '#push':
                            ctx.stack.append(ctx.stack[-1])
                        else:
                            assert False, f'wrong state def: {new_state!r}'
                        statetokens = tokendefs[ctx.stack[-1]]
                    break
            else:
                try:
                    if ctx.pos >= ctx.end:
                        break
                    if text[ctx.pos] == '\n':
                        # at EOL, reset state to 'root'
                        ctx.stack = ['root']
                        statetokens = tokendefs['root']
                        yield ctx.pos, Text, '\n'
                        ctx.pos += 1
                        continue
                    yield ctx.pos, Error, text[ctx.pos]
                    ctx.pos += 1
                except IndexError:
                    break
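
    # Callback sketch (hypothetical rule callback): under this class a
    # callback receives the context and must advance ``ctx.pos`` itself,
    # e.g. to consume a span of computed length:
    #
    #     def heredoc_callback(lexer, match, ctx):
    #         yield match.start(), String.Heredoc, match.group()
    #         ctx.pos = match.end()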


def do_insertions(insertions, tokens):
    """
    Helper for lexers which must combine the results of several
    sublexers.

    ``insertions`` is a list of ``(index, itokens)`` pairs.
    Each ``itokens`` iterable should be inserted at position
    ``index`` into the token stream given by the ``tokens``
    argument.

    The result is a combined token stream.

    TODO: clean up the code here.
    NTrF)�iter�next�
StopIterationrQ)rzr�r�r��realpos�insleftr|r_r`�oldi�tmpval�it_index�it_token�it_value�pr!r!r$rxSs\�
������rxc@r')�ProfilingRegexLexerMetaz>Metaclass for ProfilingRegexLexer, collects regex timing info.csLt|t�rt|j|j|jd��n|�t��|��tjf����fdd�	}|S)Nr�cs`�jd���fddg�}t��}��|||�}t��}|dd7<|d||7<|S)Nr�rr r�)�
_prof_data�
setdefault�timer�)rHr��endpos�info�t0�res�t1�r��compiledr�r�r!r$�
match_func�sz:ProfilingRegexLexerMeta._process_regex.<locals>.match_func)	rFrrr�r�r�r��sys�maxsize)r�r�r�r�rr!rr$r��s

�z&ProfilingRegexLexerMeta._process_regexN)r0r1r2r3r�r!r!r!r$r��sr�c@s"eZdZdZgZdZddd�ZdS)�ProfilingRegexLexerzFDrop-in replacement for RegexLexer that does profiling of its regexes.�r�c#s���jj�i�t��||�EdH�jj��}tdd�|��D��fdd�dd�}tdd�|D��}t	�t	d�jj
t|�|f�t	d	�t	d
d�t	d�|D]}t	d
|�qSt	d	�dS)NcssP�|]#\\}}\}}|t|��d��dd�dd�|d|d||fVqdS)zu'z\\�\N�Ai�)�reprrTrJ)�.0r��r�nr_r!r!r$�	<genexpr>�s���z=ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<genexpr>cs
|�jSr])�_prof_sort_indexr"rDr!r$r%�s
z<ProfilingRegexLexer.get_tokens_unprocessed.<locals>.<lambda>T)�key�reversecss�|]}|dVqdS)�Nr!)r
r#r!r!r$r
�s�z2Profiling result for %s lexing %d chars in %.3f mszn==============================================================================================================z$%-20s %-64s ncalls  tottime  percall)r�r�zn--------------------------------------------------------------------------------------------------------------z%-20s %-65s %5d %8.4f %8.4f)rCr�rGrr^r��sortedr��sum�printr0rQ)r?rHr��rawdatar��	sum_totalr/r!rDr$r^�s*��
��z*ProfilingRegexLexer.get_tokens_unprocessedNr�)r0r1r2r3r�rr^r!r!r!r$r�s
r)6r3r�rr��pip._vendor.pygments.filterrr�pip._vendor.pygments.filtersr�pip._vendor.pygments.tokenrrrrr	�pip._vendor.pygments.utilr
rrr
rr�pip._vendor.pygments.regexoptr�__all__r�rrO�staticmethod�_default_analyser*r(rrrMrrrr�r�r�rr�rrrrr�rrrrxr�rr!r!r!r$�<module>sH
 
s'
2(aH@
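
# Usage sketch (illustrative): mix ProfilingRegexLexer into the MRO of an
# existing RegexLexer subclass and lex once; the timing table is printed
# when lexing finishes:
#
#     class ProfilingPythonLexer(ProfilingRegexLexer, PythonLexer):
#         pass
#
#     list(ProfilingPythonLexer().get_tokens(source_code))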