<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="en"><front><journal-meta><journal-id journal-id-type="publisher-id">bricslawjournal</journal-id><journal-title-group><journal-title xml:lang="en">BRICS Law Journal</journal-title><trans-title-group xml:lang="ru"><trans-title>Юридический журнал БРИКС</trans-title></trans-title-group></journal-title-group><issn pub-type="ppub">2409-9058</issn><issn pub-type="epub">2412-2343</issn><publisher><publisher-name>Publishing House V. Ema</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.21684/2412-2343-2026-13-1-8-16</article-id><article-id custom-type="elpub" pub-id-type="custom">bricslawjournal-1569</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>OPINION</subject></subj-group></article-categories><title-group><article-title>Governing the Inevitable: Legal Priorities for the Development of Political Institutions in the Age of Artificial Intelligence</article-title></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><name-alternatives><name name-style="western" xml:lang="en"><surname>Kabyshev</surname><given-names>S.</given-names></name></name-alternatives><bio xml:lang="en"><p>Sergey Kabyshev – Associate Professor; Chairman, Committee on Science and Higher Education of the State Duma of the Federal Assembly of the Russian Federation; Professor, Department of Constitutional and Municipal Law</p><p>9 Sadovaya-Kudrinskaya St., Moscow, 125993</p></bio><email xlink:type="simple">usvkabyshev@mail.ru</email><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="en">Kutafin Moscow State Law University<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2026</year></pub-date><pub-date pub-type="epub"><day>08</day><month>04</month><year>2026</year></pub-date><volume>13</volume><issue>1</issue><fpage>8</fpage><lpage>16</lpage><permissions><copyright-statement>Copyright &#x00A9; Kabyshev S., 2026</copyright-statement><copyright-year>2026</copyright-year><copyright-holder xml:lang="ru">Kabyshev S.</copyright-holder><copyright-holder xml:lang="en">Kabyshev S.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://www.bricslawjournal.com/jour/article/view/1569">https://www.bricslawjournal.com/jour/article/view/1569</self-uri><abstract><p>This article examines systemic constitutional challenges arising from the diffusion of artificial intelligence (AI) into the political sphere. It argues that AI is reshaping democratic institutions by generating risks of replacing public deliberation with opaque algorithmic processes, fostering algorithmic discrimination, enabling information micro-manipulation, and concentrating power in the hands of technology holders. Particular attention is devoted to the threat posed to the socio-humanistic paradigm in the context of a normative choice between classical humanism and transhumanism. As a response to these challenges, the article proposes a framework of eight legal principles for AI regulation, including state-level strategic governance, the “human-in-the-loop” principle, anthropological primacy, digital equality, and managed transparency.
Within the electoral context, the analysis highlights specific risks such as microtargeting, “dark advertising,” deepfakes, and automated bots, which undermine electoral integrity and facilitate manipulation of voters’ will. The article concludes that ensuring the sovereignty and legitimacy of political institutions in the digital age requires the development of national AI models and robust legal regulation, including mandatory algorithmic audits and the prohibition of manipulative technologies.</p></abstract><kwd-group xml:lang="en"><kwd>artificial intelligence</kwd><kwd>constitutional challenges</kwd><kwd>political institutions</kwd><kwd>AI legal regulation</kwd><kwd>elections and manipulation</kwd><kwd>socio-humanistic paradigm</kwd></kwd-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Hackenburg, K., et al. (2025). The levers of political persuasion with conversational artificial intelligence. Science, 390(6777). https://doi.org/10.1126/science.aea3884</mixed-citation><mixed-citation xml:lang="en">Hackenburg, K., et al. (2025). The levers of political persuasion with conversational artificial intelligence. Science, 390(6777). https://doi.org/10.1126/science.aea3884</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Lin, H., et al. (2025). Persuading voters using human–artificial intelligence dialogues. Nature, 648, 394–401. https://doi.org/10.1038/s41586-025-09771-9</mixed-citation><mixed-citation xml:lang="en">Lin, H., et al. (2025). Persuading voters using human–artificial intelligence dialogues. Nature, 648, 394–401. https://doi.org/10.1038/s41586-025-09771-9</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Münker, S. (2025). 
Cultural bias in large language models: Evaluating AI agents through moral questionnaires. arXiv. https://arxiv.org/html/2507.10073v1</mixed-citation><mixed-citation xml:lang="en">Münker, S. (2025). Cultural bias in large language models: Evaluating AI agents through moral questionnaires. arXiv. https://arxiv.org/html/2507.10073v1</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Peters, U., &amp; Carman, M. (2024). Cultural bias in explainable AI research: A systematic analysis. Journal of Artificial Intelligence Research, 79, 971–1000. https://doi.org/10.1613/jair.1.14888</mixed-citation><mixed-citation xml:lang="en">Peters, U., &amp; Carman, M. (2024). Cultural bias in explainable AI research: A systematic analysis. Journal of Artificial Intelligence Research, 79, 971–1000. https://doi.org/10.1613/jair.1.14888</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Sukhanov, E. A. (2025). On civil law problems of digitalization. Herald of Civil Procedure, 1, 37–52. https://doi.org/10.24031/2226-0781-2025-15-1-37-52. (In Russian).</mixed-citation><mixed-citation xml:lang="en">Sukhanov, E. A. (2025). On civil law problems of digitalization. Herald of Civil Procedure, 1, 37–52. https://doi.org/10.24031/2226-0781-2025-15-1-37-52. (In Russian).</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Tao, Y., Viberg, O., Baker, R. S., &amp; Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9), 346. https://doi.org/10.1093/pnasnexus/pgae346</mixed-citation><mixed-citation xml:lang="en">Tao, Y., Viberg, O., Baker, R. S., &amp; Kizilcec, R. F. (2024). Cultural bias and cultural alignment of large language models. PNAS Nexus, 3(9), 346. 
https://doi.org/10.1093/pnasnexus/pgae346</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Ustinovich, E. S. (2024). Generative artificial intelligence in the 2024 electoral processes worldwide: Disinformation campaigns and online trolls. Social Policy and Social Partnership, 3. https://doi.org/10.33920/pol-01-2403-03. (In Russian).</mixed-citation><mixed-citation xml:lang="en">Ustinovich, E. S. (2024). Generative artificial intelligence in the 2024 electoral processes worldwide: Disinformation campaigns and online trolls. Social Policy and Social Partnership, 3. https://doi.org/10.33920/pol-01-2403-03. (In Russian).</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Vasilevskaya, L. Yu. (2025). Delict liability for harm caused by artificial intelligence: Problems and development prospects. Civil Law, 4, 2–5. https://doi.org/10.18572/2070-2140-2025-4-2-5. (In Russian).</mixed-citation><mixed-citation xml:lang="en">Vasilevskaya, L. Yu. (2025). Delict liability for harm caused by artificial intelligence: Problems and development prospects. Civil Law, 4, 2–5. https://doi.org/10.18572/2070-2140-2025-4-2-5. (In Russian).</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Zorkin, V. D. (2024). Lectures on law and the state. Constitutional Court of the Russian Federation. (In Russian).</mixed-citation><mixed-citation xml:lang="en">Zorkin, V. D. (2024). Lectures on law and the state. Constitutional Court of the Russian Federation. (In Russian).</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The author declares that there is no conflict of interest.</p></fn></fn-group></back></article>
