In the past, I wrote about copyright and openness. The point was that there should be a mechanism that makes open innovation profitable for those who innovate, so they are encouraged to keep innovating. The idea of copyright is to make sure innovators have incentives to innovate, but I discussed the negative aspects of closed systems.
Clearly, I always support openness, but the prerequisite for openness is making sure it still encourages innovation. Until a robust mechanism replaces the copyright mentality with an open mindset, I’ll take the advice of those who say I shouldn’t write everything here before the overall platform is complete. Personally, I think writing everything here is a good practice, because this project is tightly coupled with the reputation system and other elements of the platform; if other people copy one element of the project, it doesn’t matter much, because they cannot go on to develop the reputation system and the other crucial elements of the project.
I’ll work a few more days on the basic ideas and then make a presentation of the project (phase one) and contact the Guardian.
UPDATE 1:
It’s been pointed out that generative AI might produce nonsensical outputs: if it’s fed wrong, biased inputs, it still generates an output based on them. An alternative approach can be based on an evolutionary mechanism in which some strategies (or inputs) keep developing while competing strategies (or inputs) stall. As a result, nonsensical inputs are pushed aside by the evolutionary mechanism, while more coherent inputs become much more influential in the output. So the AI generates several competing initial outputs, which then compete with each other and evolve.
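To make that loop a bit more concrete, here is a minimal sketch in Python. Everything in it is a placeholder I made up (generate_candidates, coherence_score, mutate), not an actual implementation: candidates are scored, the incoherent ones are culled, and the survivors are varied for the next round.

```python
import random

def evolve_outputs(generate_candidates, coherence_score, mutate,
                   population_size=8, generations=5, survivors=3):
    """Toy evolutionary loop over competing AI-generated outputs.

    generate_candidates(n) -> list of n initial outputs (hypothetical)
    coherence_score(output) -> float, higher = more coherent (hypothetical)
    mutate(output) -> a varied copy of an output (hypothetical)
    """
    population = generate_candidates(population_size)
    for _ in range(generations):
        # Rank candidates; incoherent ones are pushed aside.
        ranked = sorted(population, key=coherence_score, reverse=True)
        parents = ranked[:survivors]
        # Surviving candidates keep developing through varied copies.
        children = [mutate(random.choice(parents))
                    for _ in range(population_size - survivors)]
        population = parents + children
    return max(population, key=coherence_score)
```

The open question, of course, is what the selection pressure (the coherence_score here) should actually be.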
“Coherent” is admittedly vague here, and so is the evolutionary mechanism; this project can serve as a social experiment to guide researchers on how such a mechanism might work in a social context. Currently, it’s unclear how generative AI produces its outputs, and this is one of the main concerns about AI. This project is the opposite. It doesn’t incorporate generative AI very much (at least in the initial phases); instead, it constructs several layers (similar to deep learning), but each layer consists of groups of people. The aim is to understand which evolutionary mechanism works best in this environment and whether it can be used in generative AI.
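As a very rough illustration of the “layers of people” idea, here is a sketch (all names hypothetical): each layer is a group of people that reviews and filters the proposals passed up from the layer below, much like layers in a deep network transform the output of the previous layer.

```python
from typing import Callable, List

Proposal = str
# A "group" is modelled as a function that reviews a set of proposals
# and returns the ones it keeps (possibly refined).
Group = Callable[[List[Proposal]], List[Proposal]]

def run_layers(initial_proposals: List[Proposal], layers: List[Group]) -> List[Proposal]:
    """Pass proposals through successive groups of people (a hypothetical model)."""
    proposals = initial_proposals
    for group in layers:
        # Each layer transforms the output of the previous one;
        # here the transformation is human review and selection.
        proposals = group(proposals)
    return proposals

# Example: a group that simply keeps proposals with at least one positive vote.
def voting_group(votes: dict) -> Group:
    def review(proposals: List[Proposal]) -> List[Proposal]:
        return [p for p in proposals if votes.get(p, 0) > 0]
    return review
```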
(What I said about openness doesn’t hold for research and scientific topics: it will have very positive impacts if the scientific community, all around the world, is based on open, democratic processes. Once this approach proves to be more productive, even authoritarian systems will welcome it in certain areas.)