The Potential Risks and Improvement Paths of Intelligent Algorithms Assisting Civil Trials under the Perspective of Civil Litigation Intelligence
DOI: https://doi.org/10.71222/tyx13b21

Keywords: intelligent algorithms, civil trial, potential risks, improvement paths, civil litigation

Abstract
In recent years, with the rapid development of intelligent science and technology, intelligent algorithms have emerged in quick succession. Face recognition, speech recognition, and natural language processing algorithms now interact deeply with the judiciary, broadening the path toward the construction of digital courts and the application of intelligent civil litigation scenarios. Intelligent algorithm-assisted civil trial is of great significance for enhancing judicial justice and trial efficiency. However, risks remain: algorithm failure may mislead civil trials; algorithms lack empirical rationality and moral ethics; and the alienation of algorithmic power may undermine trial fairness. These risks should be addressed by improving the supervisory mechanism for algorithm-assisted civil trials, constructing a systematic responsibility system for intelligent algorithmic assistance in civil trials, and limiting the application scenarios of intelligent-assisted civil trials, in an effort to promote intelligent civil litigation.