- cross-posted to:
- technology@lemmit.online
In my experience, 100% of executives don’t actually know what their workforce does day-to-day, so it doesn’t really surprise me that they think they can lay people off because they started using ChatGPT to write their emails.
This was my immediate thought too. Even people 2-3 levels of management above me struggle to understand our job, let alone the person 5-6 levels up in the executive suite.
At my last job my direct manager had to explain to upper management multiple times that X role and Y role could not be combined because it would require someone to physically be in multiple places simultaneously. I think about that a lot when I hear about these corporate plans to automate the workforce.
However, people saying that the C-suite can be replaced with GPTs don’t realize that plenty of people outside the C-suite could be replaced (or kept on) just as easily. There’s a lot of office plankton around with reasoning skills such that I just don’t know how their work can bring profit.
I can’t decide whether those people are really needed, or whether they’re employed so they won’t collectively lynch those of us who’d stay relevant but aren’t social enough to defend ourselves from that doom.
The problem with building hierarchies of humans is humans politicking, lying, and scheming with each other, to say nothing of the usual stuff like friendship and sympathy and their opposites. It’s just impossible to see what’s really happening behind all that.
Can’t wait for AI to replace all those useless execs and CEOs. It’s not like they even do much anyways, except fondling their stocks. They could probably be automated by a markov chain
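The Markov chain quip is at least implementable: a toy bigram chain trained on canned exec phrases does produce plausible-sounding boardroom babble. A minimal sketch in Python, where the corpus and function names are invented purely for illustration:

```python
import random
from collections import defaultdict

# Tiny invented corpus of exec-speak (illustrative only).
corpus = ("maximize shareholder value and drive synergy to "
          "maximize growth and drive value").split()

# Bigram transition table: word -> list of observed next words.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def babble(start, n=8, seed=0):
    """Walk the chain for up to n steps, picking each next word at random."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n):
        nxt = transitions.get(word)
        if not nxt:
            break
        word = random.choice(nxt)
        out.append(word)
    return " ".join(out)

print(babble("maximize"))
```

With a real corpus of earnings-call transcripts the output would be indistinguishable from the genuine article, which is rather the commenter’s point.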
If they could replace project managers, that would be nice. In theory it’s an important job, but in practice it’s just done by someone’s mate who was most productive when they didn’t actually turn up.
The Paranoia RPG has a very realistic way of determining who gets to be the leader of a group. First, you pick who’ll do what kind of job (electronics, brute force, etc). Whoever didn’t get picked becomes the leader, as that person is too dumb to do anything useful.
Yes, that’s quite a funny and satirical way of doing it, but it’s probably not actually the best way in real life.
I think Boeing has proven this quite nicely for everyone: the company was much better off when they had actual engineers in charge. When the corporate paper pushers took over, everything went downhill.
I have been on enough projects where engineers were in charge that went to hell to know that isn’t always a solution. And yes, I am an engineer.
On one of the projects I’m on now, the main lead is a full PE civil engineer, and it’s a man-made clusterfuck: well behind schedule, over budget, and several corporate bridges burned. We haven’t even started digging yet.
By far the biggest clusterfuck I was ever on was run by a chemical engineer. A $40 million disaster that should never have even been considered.
Being good at technical problems (which frankly most of us aren’t) doesn’t mean you know how to do anything else.
I have had good ones and not so good ones.
I swear people don’t know the difference between a good project manager, a bad one, and none at all.
Everyone on here goes on about how the board has no idea what the bottom rungs of the ladder do, all “haha, they’re so stupid they think we do nothing.” Then in the next sentence they say they don’t know what the board does and that it just does nothing.
Project managers or board members? What the hell are you on about?
People slagging off jobs they don’t understand.
Both: project managers, whom they probably do have experience dealing with but don’t understand, and board members, whom they probably have no experience with and also don’t understand.
Board members don’t do shit
I see.
What is this judgment based on?
First hand experience
Don’t get a job in government contracting. Pretty much, I do the work and around five people have suggestions, none of whom I can tell to fuck off directly.
Submit the drawing. Get asked to make a change to align with a spec. Point out that we took exception to the spec during bid. Get asked to make the change anyway. Make the change. Get asked to make another change by someone higher up the chain of five. Point out change will add delays and cost. Told to do it anyway. Make the next change…
Meanwhile, every social scientist: “we don’t know what is causing cost disease.”
I predict a huge demand for workforce in five years, when they finally realize AI doesn’t drive innovation, but recycles old ideas over and over.
I predict execs will never see this, despite you being correct. We replaced most of our HR department with enterprise GPT-4, and now almost all HR inquiries where I work are handled through a bot. It dreams up HR policies and sometimes deletes your PTO days.
But can you convince it to report itself for its violations if you phrase it like it’s a person?
No, unfortunately. A lot of us fucked with it, but it keeps logs of every conversation and flags abusive ones to management. We all got a stern talking-to about it afterwards.
“Trust your tools”. Not my fault the hammer was replaced by a banana.
I give you permission to replace HR with chatgpt. It just can’t be any worse.
Yeah the 59% in this survey are going to end up pretty successful and buy out the 41%
but recycles old ideas over and over.
I am so glad us humans don’t do that. It’s so nice going to a movie theater and seeing a truly original plot.
These are the same people who continue to use monetary incentives despite hard scientific evidence that they have the opposite effect from what is desired. They’re not gonna realise shit.
The same ones refusing to give raises who then act shocked and complain bitterly about loyalty when people quit for a higher wage somewhere else.
Seems to have been working for Hollywood films for the last 20 years.
“Workforce” doesn’t produce innovation, either. It does the labor. AI is great at doing the labor; it excels at mindless, repetitive tasks. AI won’t be replacing the innovators, it will be replacing the desk jockeys who do nothing but update spreadsheets or write code. What I predict we’ll see is the floor dropping out of technical schools that teach the things that AI will be replacing. We are looking at the last generation of code monkeys.

People joke about how bad AI is at writing code, but give it the same length of time as a graduate program and see where it is. Hell, ChatGPT’s underlying model, GPT-3, only entered beta in June of 2020 (just 13 years after the first iPhone, and look how far smartphones have come).

There won’t be a huge demand for workforce in 5 years; there will be a huge portion of the population that suddenly won’t have a job. It won’t be like the agricultural or industrial revolution, where it takes time to make its way around the world, or where there is some demand for artisanal goods. No one wants artisanal spreadsheets, and we are too global now not to outsource our work to the lowest bidder with the highest thread count. It will happen nearly overnight, and if the world’s governments aren’t prepared, we’ll see an unemployment crisis like never before.

We’re still in “Fuck around.” “Find out” is just around the corner, though.
Even mindless and repetitive tasks require instances of problem solving far beyond what AI is capable of. To replace 41% of the workforce you’d need AGI, and we don’t know if that’s even possible.
Let’s also not forget that execs are horrible at estimating work.
“Oh, this’ll just be a copy-paste job, right?” No, you idiot, this is a completely different system, and because of xyz we can’t just copy everything we did on a different project.
Or salesmen. “Oh, you have another system to integrate with? No, no change in estimates, everything is OK.”
Then they conclude the deal, and suddenly that information reaches the people who’ll actually be doing the work.
It was 41% of execs saying workforce will be replaced, not 41% of workforce will be replaced
It’s not replacing people outright; it means each person is capable of doing more work, so we only need 41% of the people to achieve the same task. It will crash the job market. Global productivity and production will improve, then the AI will be updated; repeat. It’s just a matter of whether we can scale industry to match the total production capacity of people with AI assistance fast enough to keep up. Both of these things are currently exponential, but the lag may cause a huge unemployment crisis in the meantime.
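The arithmetic behind that claim can be made explicit. Under the simplifying (and entirely hypothetical) assumption that AI makes each worker k times more productive, the same output needs only 1/k of the headcount; a multiplier of roughly 2.44x is what would leave 41% of the people. A quick sketch, with made-up numbers:

```python
def headcount_needed(current_headcount: float, productivity_multiplier: float) -> float:
    """Headcount required for the same total output if each worker's
    productivity is scaled by `productivity_multiplier` (toy model:
    assumes output scales linearly with headcount and productivity)."""
    return current_headcount / productivity_multiplier

# Hypothetical figures: a ~2.44x multiplier leaves about 41 of 100 workers.
print(round(headcount_needed(100, 2.44)))
```

The real question, as the comment notes, is whether demand for output grows fast enough to reabsorb the other 59% before the lag bites.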
In this potential scenario, instead of axing 41% of people from the workforce, we should all get 41% of our lives back. Productivity and pay stay the same while the benefits go to the people instead of the corporations for a change. I know that’s not how it ever works, but we can keep pushing the discussion in that direction.
You and I know damn well that a revolution is the only way that’s gonna happen, and there aren’t any on the horizon.
What do you replace it with after a revolution? Communism doesn’t work, capitalism is flawed, and democracy is flawed but seems to at least promote our freedoms. I think we definitely need a fluid democracy before we can start thinking about how we solve the economic problems (well, other than raising the minimum wage, that’s a no-brainer) without undermining exponential growth.
Capitalism isn’t just flawed, it’s broken. For every prosperous nation like the UK or Germany, there are half a dozen Haitis and Panamas.
By “communism”, I presume you mean Marxist-Leninist state socialism, which indeed fails miserably. However, it isn’t the only alternative to capitalism. Historically, there have been several communes during the Spanish and Russian civil wars that worked fine and didn’t have a central leader, let alone a dictatorship. Although they died because of military blunders, this model is currently being followed more or less in Chiapas by the Zapatistas.
In these places, workers’ councils ruled. Direct face-to-face democracy among neighbours was how most things were done. I reckon that this is a fairly nice arrangement.
Democracy’s flaws come from subversion by the wealthy and the fact that republics don’t let people really participate, but rather choose people who participate in their place.
We are walking, talking general intelligences, so we know it’s possible for them to exist; the question is more whether we can implement one using existing computational technology.
I’ve worked with humans who have computer science degrees and 20 years of experience, and some of them have trouble writing good code, debugging issues, communicating properly, and integrating with other teams/components.
I don’t see “AI” doing this. At least not these LLM models everyone is calling AI today.
Once we get to Data from Star Trek levels, then I can see it. But this is not that. This is not even close to that.
People are always enthusiastic about automating others’ jobs. Just like they are about having opinions on areas of knowledge utterly alien to them.
Take how most people see the work of medics, for example.
And the fact that revolutions happened a few times in known history makes them confident that another one is just around the corner, and that of course it’ll affect others and not them.
You know what I like about the Pareto principle and all the “divide and conquer” algorithms? You still have to know where the division is and which 10% is more important than the other 90%.
Anyway, my job is in learning new stuff quickly and fixing things. Like that of many, many people, even some non-technical types.
People who can be replaced with machines mostly already have been, and where they can’t, it’s also a matter of social pressure. Mercantilism, protectionism, and guilds historically defended the interests of certain parties, with force too.
No, I don’t think there’ll be a sudden “find out” different from any other period of history.
Hahahaha, good one
just 13 years after the first iPhone, and look how far smartphones have come
I disagree.
As someone who had the first iPhone: it was amazing and did basically everything a new one does. It went on all websites, had banking apps and everything.
I would actually argue phones have become worse; they are very bloated and spy on you. At first they actually made your life better, and there were no social media apps supercharged for addiction.
Hype hype hype hype hype.
Hilarious L take
You know what I love about blocking people?
The Oncology pharma companies would love that! Every time I google symptoms I swear…
59% of execs are wrong.
They’ll be replaced with AI
I think that’s a little low.
If Gartner comes out with a decent AI model, you could replace over half of your CIOs, CISOs, CTOs, etc. Most of them lack any real leadership qualities and simply parrot what they’re told/what they’ve read. They’re there through nepotism.
Also, most of them use AI as a crutch, so that’s all they know. Meanwhile, the rest of us use it as a tool (what it’s meant to be).
Christ, if you think a CTO is hard to deal with, wait until you have to interface with the AI CTO.
As long as I can prompt-engineer my way into twice the salary for half the hours, that might still be worth it!
simply parrot what they’re told/what they’ve read.
That’s exactly what an LLM is
But the AI can do it cheaper
But their job is to be the fall guy.
Yup. The owners can save a lot of money on those paychecks.
Thankfully I don’t even wanna work. I just wanna live and if that’s not possible, exist.
Same. I welcome our AI overlords as long as that means I can just stay at home and fully embrace my autism by not giving a fuck about the workforce while studying all of the thousands of subjects I enjoy learning about.
I say AI overlords might be an improvement over the human overlords that have persisted throughout human history.
The AI overlords will be trained on data based on human overlords decisions and justifications. We are fucked, my man.
They won’t be, though, because the managers don’t know anything about AI. The people who actually train the AI will be some poor sap in IT who’s been lumbered with a job they don’t want, because AI is computers, right?
So I’m going to train it on good stuff written by professionals, Star Trek episodes, and make it watch War Games.
The managers don’t even have any data sets the AI could absorb anyway because most of their BS is in person, and so not recorded for analysis.
Oh my. I see you don’t know much about the hell called key performance indicators…
Key performance indicators will be what will turn our AI overlords into AI tyrants. And there is so so much data available for training the AIs.
The autism is not required. No one cares about their jobs, especially people who work in jobs where “everyone is a family”. People care about those jobs the least.
I will never care if AI takes mandatory work from me, but I want income replacement lol. Seriously though I hate working so much every job I’ve ever had has made me suicidal at some point. I’m glad there’s a chance at least I won’t have nothing but work and death ahead of me. If that’s all that’s left it’s okay, a little disappointing but it is what it is.
Not a thing til the revolution, dear.
People here keep belittling AI. You’re all wrong, at least when considering the long run… We can’t beat it. We need to outlaw it.
Train it to replace CEOs.
It’s Schrödinger’s AI. It is both useless and will replace everyone. Depending on the agenda the particular person is trying to push.
We need to outlaw it.
Train it to replace CEOs.

Oh, there it goes again.
I know, it’s getting boring. I am tired of people telling me how ChatGPT and friends are toys that just spit back website data, and in the same comment telling me how they are basically angry gods ready to end the human race.
Fucking make up your mind!
“Smash the looms” is the wrong idea.
“Eat the rich” might have some merit though.
Yeah, don’t smash the looms, seize them. The ability to make labor easier and more efficient is a positive if we don’t allow it to be a means to impoverish the workers.
Nah, I disagree on both counts.
We can’t beat it. We need to outlaw it.
Is the intent here to preserve jobs even if it’s less productive? That’s solving the wrong problem. Instead of banning it, we should be adapting to it. If AI is more efficient than people, the jobs people take should change.
I think there’s a solid case that if something would devolve into rent-seeking because competition is unproductive, it should be provided as a public service. Do you need a job if all of your basic needs are met by AI? At that point, any work you do would be optional, so people would follow their passions instead of working to make ends meet (see: Star Trek universe).
Think of it like Basic Income, but instead of cash, you’d get services at-cost. I think there’s room for non-profits (or maybe the government) to provide these AI-services at-cost.
Outlawing it is a very dangerous aim, because outlawing it completely will enable other countries to out-compete us, and outlawing it completely is right next to “outlaw it for normal people, but allow companies to exploit it for profit” on the dart board of possibilities.
Better path all around is “allow everyone to use AI and establish strong social safety nets and move towards enabling people to work less”.
Haven’t I been hearing that since the rise of computing and the internet? And it’s probably been around even longer. Seems like this sort of stuff only gets going when a lot of workers start putting up a fight.
But hey, maybe 41% jobs lost might be the tipping point. Because people aren’t just gonna sit on the sidewalk and starve.
If AI is outlawed, only outlaws will have AI
Y’all are dumbass doomers. Have some fun with AI while you can, you aged peasants. We were always fucked.
It’ll reduce the workforce from well-remunerated professionals who perform tasks to a larger number of disposable minimum-wage labourers who clean up botshit.
Pretty sure the entire Republican party and the ruling class they serve just orgasmed at that thought.
AI will remove 41% of execs, say 100% of people who know what AI is.
Say execs. You know, the people who view labor as a cost center.
They say that because that’s what they want to happen, not because it’s a good idea.
And only 41%.
I’ve advised past clients to avoid reducing headcount and instead look at how they can scale up productivity.
It’s honestly pretty bizarre to me that so many people think this is going to result in the same amount of work with fewer people. Maybe in the short term a number of companies will go that way, but not long after, they’ll be out of business.
Long term, the companies that are going to survive the coming tides of change are going to be the ones that aggressively do more and try to grow and expand what they do as much as possible.
Effective monopolies are going out the window, and the diminishing returns of large corporations are going to be going head to head with a legion of new entrants with orders of magnitude more efficiency and ambition.
This is definitely one of those periods in time where the focus on a quarterly return is going to turn out to be a cyanide pill.
Yup, and there’s a lot you can do to increase productivity:
- less time wasted in useless meetings - I’ve been able to cut ours
- more time off - less burnout means more productivity
- flexible work schedules - life happens, and I’m a lot more willing to put in the extra effort today if I know I can go home early the next day
- automate the boring parts - there are some fantastic applications of AI, so introduce them as tools, not replacements
- profit sharing - if the company does well, don’t do layoffs, do bigger bonuses or stock options
- cut exec pay when times get hard - it may not materially help reduce layoffs, but it certainly helps morale to see your leaders suffering with you
And so on. Basically, treat your employees with respect and they’ll work hard for you.
Short term is all that matters. Business fails? Start another one, and now you have a bunch of people that you made unemployed creating downward pressure on labor prices.
No, you have a lot of people you made unemployed competing with you.
This is already what’s happening in the video game industry. A ton of people have lost their jobs, and VC money has recently come pouring in trying to flip the displaced talent into the next big success.
And they’ll probably do it. A number of the larger publishers are really struggling to succeed with titles that are bombing left and right as a result of poor executive oversight on attempted cash grabs to please the short term market.
Look at Ubisoft’s 5-year stock price.
Short term is definitely not all that matters, and it’s a rude awakening for those that think it’s the case.
Mostly the execs don’t care. They’ve extracted “value” in the form of money and got paid; that’s the extent of their ability to look forward. The faster they make that happen, the faster they can do it again, probably somewhere else. They don’t give a single shit what happens after.
It really depends on the exec.
Like most people, there’s a range.
Many are certainly unpleasant. But there are also ones that buck the trend.
Yeah, and there are a few good lawyers and a few good cops and (probably) a few good politicians too, but we’re not talking about the few exceptions here.
Well, we kind of are, as the shitty ones tend to fail over time and the good ones continue to succeed. In a market made far more competitive by a force multiplier on labor unlike anything the world has seen, there won’t be much room for crappy execs for very long.
Bad execs are like mosquitos. They thrive in stagnant waters, but as soon as things get moving they tend to reduce in number.
We’ve been in a fairly stagnant market since around 2008 for most things with no need for adaptation by large companies.
The large companies that went out of business recently have pretty much all failed from financial mismanagement, not product/market fit. Compare Circuit City or Blockbuster from the last time adaptation was needed, when those who failed to adapt went out of business.
The fatalism on Lemmy is fairly exhausting. The past decade shouldn’t be used as a reference point for predicting the next decade. The factors playing into each couldn’t be more different.
How do you arrive at “effective monopolies are going out the window,” and how do you square it with what we see in the world today, which runs counter to that?
There are diminishing returns on labor for large companies, and an order-of-magnitude labor multiplier is in the process of arriving.
For example, if you watched this past week’s Jon Stewart, you saw an opening segment about the threat of AI taking people’s jobs and then a great interview with the head of the FTC talking about how they try to go after monopolistic firms. One of the discussion points was that often when they go up against companies that can hire unlimited lawyers they’ll be outmatched by 10:1.
So the FTC with 1,200 employees can only do so much, and the companies they go up against can hire up to the point of diminishing returns on more legal resources.
What do you think happens when AI capable of a 10x multiplier in productivity at low cost is available for legal tasks? The large companies are already hiring to the point where there’s not much more benefit from additional labor. But the FTC is trying to do as much as they can with a tenth of the resources.
Across pretty much every industry companies or regulators a fraction of the size of effective monopolies are going to be able to go toe to toe with the big guys for deskwork over the coming years.
Blue collar bottlenecks and physical infrastructure (like Amazon warehouses and trucks) will remain a moat, but for everything else competition against Goliaths is about to get a major power up.
Freeing humans from toil is a good idea, just like the industrial revolution was. We just need our system to adapt and change with this new reality; AGI and universal basic income mean we could live in something like the society in Star Trek.
I’m sure that’s what execs are talking about.
Doesn’t matter what the execs say; it will happen, and it will become easier and easier to start your own business. They are automating themselves out of a high-paying job.
41% of execs think that a huge amount of class power will go from workers in general to AI specialists (and probably the companies they make or that hire them).
I personally can’t wait for the people these businesses wrongly bet on replacing to turn around and form new competitors, with this new tech filling in the gaps of middle management, HR, execs, etc.
I mean, it’s a fucking meme, but an AI-assisted workplace democracy seems alright to me on paper (the devil’s in the details).
Execs don’t give a shit. They simply double down on the false cause fallacy instead. They wouldn’t ever admit they fucked up.
Last year the company I work for went through a run of redundancies, claiming AI and system improvements were the cause. Before this point we were growing (slowly) year on year. Just not growing fast enough for the shareholders.
They cut too deep, shit is falling apart, and we’re losing bids to competitors. Now they’ve doubled down on AI, claiming blindness to the systems issues they created, and just made an employee’s “Can Do” attitude a performance goal.
You sound like you work for one of my parts suppliers.
Let’s try it. I am willing to start a worker co-op headed by votes and an AI. Fuck it.
Well, it’s good to know 59% of execs are aware that AI isn’t gonna change shit.
Some of that 59% might be, but I guarantee at least some very strongly think it will change things; they just think the change will require as many people as before (if not more), with those people doing exponentially more.
Can AI replace executives too?
Yes.
The biggest factor in terms of job satisfaction is your boss.
There’s a lot of bad bosses.
AI will be an above average boss before the decade is out.
You do the math.
Yes. And it will.
As soon as we’ve managed to make a computer that can simulate an entire brain in real time. Who knows how many decades or even centuries that will take.
No. Middle management is a lot of repeated tasks that an AI could do. The thing is that we’re not talking about replacing all middle management; we’re talking about giving 10% of the managers the tools to run 90% of the repetitive, tedious, and boring tasks.
To replace a corporate executive? No, I don’t think it would take that much. We already have algorithms more than capable of replacing CEOs. There is nothing that challenging in what they do…
The challenge is to not do whatever the optimal algorithm says. If they simply did what an algorithm says, it would be very easy for competitors to predict.
The challenge comes in being a scapegoat for when things go wrong (albeit a goat with a golden parachute) and a hype man for when things go right.
But as others have said AI won’t replace executives because it’s executives making the decisions to use AI, and no one with power will ever choose an option that reduces their own money.
You make it sound like corporations invent a new revolutionary wheel each quarter. They don’t.
What fantastic new beverage have Coca Cola launched the last couple of years? What astonishing new car technology has GM or Volkswagen released lately?
Most companies are doing what they’ve always done, guarding their market share. Now and then some small competitor with something revolutionary pops up and either starts eating market share or gets acquired by one of the bigger ones.
So between a competitor popping up and one of your engineers stumbling into a lucky accident, all you do is manage the business as you always have.
It’s amazing how this delusion gets repeated so much in here. Absolute unhinged shit.
I never had the impression that there were enough people for the amount of work anyways. I don’t see jobs go, but shift. Most developers will be fine, because of never-ending work; AI is just a tool speeding things up. But not by that much, as someone who is good with Google and Git is just a bit slower to find the same answers. And AI needs verification too, even if it links you directly to the issue at hand via source URL.
AI will create new issues. Some of the low-level requirement jobs will go, like working in first-level support, but only if you train the AI yourself, else it’s too generic. We’re not there yet, where companies train their own LLMs. Some outliers try.
We got to understand that there’s still a human layer and a lot of people might prefer calling a human, even if the result is worse, simply because we’re social beings. This can cost a lot of customers, if companies believe they can just shove an AI in front.
No one really knows how good AI will get. As the technology advances, we find more and more hard to solve issues, for instance that AI will make things up or gives wrong answers, despite knowing the real answer, if you pressure hard enough.
Also for security reasons you can’t add AI everywhere, unless you want to send all secrets directly to Microsoft, Google or Facebook.
My 5 cents.
After reading this article that got posted on Lemmy a few days ago, I honestly think we’re approaching the soft cap for how good LLMs can get. Improving on the current state of the art would require feeding it more data, but that’s not really feasible. We’ve already scraped pretty much the entire internet to get to where we are now, and it’s nigh-impossible to manually curate a higher-quality dataset because of the sheer scale of the task involved.
We also can’t ask AI to curate its own dataset, because that runs into model collapse issues. Even if we don’t have AI explicitly curate its own dataset, it’s highly likely going to be a problem in the near future with the tide of AI-generated spam. I have a feeling that companies like Reddit signing licensing deals with AI companies are going to find that they mostly want data from 2022 and earlier, similar to manufacturers looking for low-background steel to make particle detectors.
We also can’t just throw more processing power at it because current LLMs are already nearly cost-prohibitive in terms of processing power per query (it’s just being masked by VC money subsidizing the cost). Even if cost wasn’t an issue, we’re also starting to approach hard limits in physics like waste heat in terms of how much faster we can run current technology.
So we already have a pretty good idea what the answer to “how good AI will get” is, and it’s “not very.” At best, it’ll get a little more efficient with AI-specific chips, and some specially-trained models may provide some decent results. But as it stands, pretty much any organization that tries to use AI in any public-facing role (including merely using AI to write code that is exposed to the public) is just asking for bad publicity when the AI inevitably makes a glaringly obvious error. It’s marginally better than the old memes about “I trained an AI on X episodes of this show and asked it to make a script,” but not by much.
As it stands, I only see two outcomes: 1) OpenAI manages to come up with a breakthrough–something game-changing, like a technique that drastically increases the efficiency of current models so they can be run cheaply, or something entirely new that could feasibly be called AGI, 2) The AI companies hit a brick wall, and the flow of VC money gradually slows down, forcing the companies to raise prices and cut costs, resulting in a product that’s even worse-performing and more expensive than what we have today. In the second case, the AI bubble will likely pop, and most people will abandon AI in general–the only people still using it at large will be the ones trying to push disinfo (either in politics or in Google rankings) along with the odd person playing with image generation.
In the meantime, what I’m most worried for are the people working for idiot CEOs who buy into the hype, but most of all I’m worried for artists doing professional graphic design or video production–they’re going to have their lunch eaten by Stable Diffusion and Midjourney taking all the bread-and-butter logo design jobs that many artists rely on for their living. But hey, they can always do furry porn instead, I’ve heard that pays well~
I find the way that you write peculiar, in a good way. I mean no offense, but is English your secondary language?
Yeah, it’s my second language. Sorry I wrote it a minute before bed, sometimes sentences become even weirder then. I went back and added some more commas. Haha