Thanks for sharing, interesting read and questions. Surely you'll be downvoted here for anything with AI... but c'est la vie.
I've been doing coding projects in VS Code using GPT, Claude, and Gemini. Woe are the days when my credits are used up and only GPT-4.1 is available. Claude's ability to research and architect multi-step software solutions is very, very good, and it rarely makes messes or spins its tires compared to older models from just a few months ago. This is precisely what converted me to 'whoa - AI', which is adjacent to 'pro AI'.
Lately I've been experimenting with customizing Gemini via instructions that include a link to a Drive folder of .md files with specific instructions for different agent tasks, such as performing specific market analysis, doing a news roundup with a specific list of topics while omitting previously reviewed items, etc. The files allow for both complex instructions and lists, as well as some chance to construct memory via logging. Results are a mixed bag: lots of additional functionality created, lots of inconsistent output.
Have you considered any tests of more complexity? Something like 'write a program that...'? I think what will differentiate these models going forward is that some have architect capabilities - strategy, insight, decision making - while others are agents: they do specific tasks well but have limits. With that model, the AI architect and its AI agents need to work as a team to complete a multi-step task.
I have paid for, and been paid for, CAD and other impacted engineering products, including software. AI is unfair... how?
I can see some issues with copyright, and I acknowledge it will upset economies.
But being able to 'automate' photo-to-3D, along with so many other tools that let me do things that wouldn't have happened before - it's unimaginable to many around me. Change is scary, strap in.
I tried Meshy and Trellis and Hitem. Next I'll try printmon.
Hitem has the best free option and portrait mode. Made some great busts.
Meshy looked great, if cartoony, with its model 6, but only let me download output from model 4, which was a surprise and produced monsters.
Trellis was in between and I ran out of huggingface tokens quickly.
I'd use Hitem all day, but I'm not interested in paid subscriptions for my passing hobby. Hoping Bambu is handy with printmon again, though I expect it may be proprietary.
Also, all of them needed some cleanup. Bambu could fix, slice, and print, but not adjust details and cuts. I started with Blender, but the interface is hard for a CAD person. I switched to Meshmixer, which works great, and just use Bambu to do a final fix of the STL before slicing. Make Solid is a helpful tool in Meshmixer if you get a good-looking but imperfect STL, but you have to be careful to avoid losing detail.
If we were a simple 'normal' population, your wife's idea holds: there should be one 1-in-1000 athlete in every 1000 people. To see a 1-in-1000 athletic performance with 50% confidence, you need only 693 samples. So if many thousands have played, you'd expect to have seen peak performance.
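The 693 figure comes from asking how many independent draws you need before the chance of at least one 1-in-1000 hit reaches 50%. A quick sanity check (just illustrating the arithmetic in the comment, nothing more):

```python
import math

# Probability that any one person is a 1-in-1000 athlete.
p = 1 / 1000

# P(at least one hit in n draws) = 1 - (1 - p)^n.
# Solve 1 - (1 - p)^n >= 0.5 for n:
n = math.ceil(math.log(0.5) / math.log(1 - p))
print(n)  # 693
```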
But we aren't distributed like that. Z-score analysis of a measurable sport indicates that a known top athlete like Usain Bolt is on the order of 5 standard deviations from the norm (depending on what we consider the norm data set). That's more like 1 in a million to 1 in 10 million to get a Bolt, which implies millions need to try (and train) to produce a Bolt-level performance (3 humans in that tier so far implies between 3 and 30 million have tried). So a Bolt seems to be reaching human limits, reinforcing your wife's position for that sport: we are approaching the human limit.
But wait - that is a popular sport with a single simple measure. If there were multiple relevant independent measures (say hitting and pitching, or running and throwing), even just 2, the odds of finding the best become astronomical. A dual 1-in-1000 is 1 in a million. A dual z=5 athlete is 1 in 12 trillion.
So the implication is that for more complex sports where multiple attributes apply, it is much more likely we have not yet seen peak human capability. It's also much harder to measure and recognize when we do, so props to the legendary players, and keep searching for them. We won't know how good they really were until we sample (play) the sport for hundreds or thousands of years. Finding the peak is incredibly lucky/unlikely for our most popular complex sports.
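The z-score odds above can be checked with the standard normal tail probability. Which data set defines 'the norm' is debatable, so treat these as order-of-magnitude figures rather than exact ones:

```python
import math

def tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# A z = 5 outlier in a single measurable attribute (roughly Bolt-level):
p_single = tail(5.0)
print(f"1 in {1 / p_single:,.0f}")  # about 1 in 3.5 million

# Two independent z = 5 attributes in the same person:
p_dual = p_single ** 2
print(f"1 in {1 / p_dual:,.0f}")    # about 1 in 12 trillion
```

Independence between the two attributes is the key assumption; correlated traits (e.g. speed and jumping) would make a dual outlier considerably less rare.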
Thanks for the feedback. For clarity, Cape is offering a GrapheneOS installed out of box to the user for a surcharge.
This is what connected the title:
https://www.cape.co/blog/cape-supports-grapheneos
2 months isn't that long and you should keep your head up and keep trying. Discouragement and lack of effort are the enemy.
I would add: consider your target industries. Different industries have different cycles and levels of available positions. If you're mostly looking in retail, this might not be the right economy or time of year, etc. One industry that usually has high demand and might overlap with psychology is health care. Assisted living, home health care, and many related non-medical care environments have consistent staffing challenges and don't require specific nursing or medical degrees. I paid my way through college that way and learned a lot of life lessons, including the reasons that work isn't for everyone. YMMV.
There are probably some other understaffed, unglamorous jobs in your area if you look with fresh eyes. And as others said, volunteering some free time could be a win-win: doing stuff keeps the spirit up, and being involved creates opportunities.
I have been to the science fair, and the county science fair, and the state science fair.
No, I didn't touch my daughter's project.
At county, there was an obvious element of parent projects, but judges interviewed kids and weeded out those who didn't know much about the project. Some winners there still had obvious assists, but at least they could interview.
State was wild (CA). No parents in the hall during the day. Kids reported massive judging variations, little standardization and obvious tech bias. Her cognitive science category gave out all 3 awards for AI related projects.
Check-in was insane. Allowed materials were the board and a few feet of space on the table. People were pulling in with trailers. Massive arguments, tears.
Day of, kids were wearing fitted suits. Coordinated family outfits with ostentatious wealth on show. What a bizarre view of America.
This is another really good reason to be upset with the 10 yr warranty. It implies a longevity well beyond what this product can do.
And the waste. My god the waste. Piles upon piles of unrecyclable petroleum derived foam. Ok, in relative terms to our modern lifestyle it fits right in, but that's not good.
And if it lasts half as long as they say, and they won't touch it at the end of its life, what does that say?!