I'm a chatgpt power user and I use it for curriculum development and marketing. I have no background in math, programming, or basically any of the technical stuff you share.
Do you think giving my chat the AlphaResearch or RVLR papers would increase its ability to generate novel ideas or perspectives on my actual subject matter?
Or do these research papers describe things that only work with some kind of proprietary research system?
It's not clear to me what happens after these papers are published. The headlines always sound like good advancements, but I don't know who actually uses or benefits from them.
They're usually helpful to you as an individual if you build and work with AI systems as an AI engineer. They also help if you're into AI research and want to borrow ideas from them.
Ok thank you.
Is there any circumstance where you give a standard LLM some math equations to help improve its performance, output quality, memory, or anything like that? Like having it ingest some equations and use them to inform how it thinks about its response.
An LLM can be prompted that way, but in general including a mathematical equation in a prompt isn't required to improve its output.