The principles of precaution and beneficence are introduced by the Nuffield Council on Bioethics (2013) in its proposed ethical framework for novel neurotechnologies. The principle of precaution does not mean that research should proceed only in the absence of any risk of harm to participants. Rather, it recognises the need to accept some risks, and the uncertainties around them, provided that the research could confer a significant benefit on existing patients or advance the public good. In turn, the principle of beneficence relates to the responsibility to do good where possible. While these principles require that risks to participants be identified and minimised, and that participants be carefully selected so as to minimise risks and enhance benefits, it is less clear whether a participant should be involved in research that offers them no conceivable benefit, even where the prospect of harm is likely to be minimal. This paper considers ethical contentions that arise from the involvement of especially vulnerable persons (tetraplegic patients, for instance (Clausen et al., 2017)) in research involving the implantation of neurodevices intended to enable these persons to communicate and/or control an external assistive device in real time. As many of these novel neurotechnologies are mediated by artificial intelligence (AI) programs, key ethical and regulatory implications are also examined in this context, with a focus on the roles and responsibilities of ethics review committees and research funders in meeting the ethical requirements of caution and benefit.