When it comes to AI replacing human jobs, assuming progress continues at the current pace or faster:
Lately, I think (or do I cope?) that current AI systems are inherently quite different from human intelligence, essentially a different form of intelligence. There is some convergence with human intelligence, but it is far from complete, and I don't see enough evidence that the trend is shifting decisively toward human-like intelligence. Instead, I see the emergence of differently useful patterns of information processing: AI systems are already better than humans in some aspects but totally flop in others (though which aspects those are keeps changing and improving over time), and they are often also differently specialized:
So even if they automate large parts of the human economy, software engineering for example, human intelligence will still be useful for some subset of the job: wherever human intelligence remains different from machine intelligence and is thus possibly valuable, for error correction, for assigning tasks to the AI, or for more human-like communication with clients. Other jobs will also emerge; we already see roles like "AI pilots" and "AI output verifiers and fixers" starting to appear in some industries, along with prompt engineering in the style of writing many pages of concrete specifications for the AIs.


