maybe they should try listening to their users. Maybe that would increase donations
Those decisions include layoffs, most notably Caroline Henrikson, Creative Director, and Melissa Wu, Director of Community Development.
Seems they might be listening. If their first two cuts are ‘community development’ and ‘creative director’, then we can only assume they’ve probably come to the same conclusion you stated.
here’s to hoping. I don’t use gnome personally, I prefer kde. But I’d rather there be more than one big player all the same.
Good riddance.
It’s interesting to see that taking a “My way or the highway” approach seems to have actual repercussions. Almost as if nobody wants to work with you when you do that.
I know that I and many others have donated to KDE due to their vibrance and inclusivity in the conversation. They have panels where they actively ask what it is that users want to see (within the scope of some broader goals they’ve set for the year).
A lot of people have been very upset at the GNOME Foundation and its all-or-nothing decisions. They repelled some of their biggest contributors, to the point where those contributors created entire graphical stacks of their own just to avoid fighting GTK. I don’t really understand the point behind their decisions.
Gnome has different goals than KDE. The idea behind gnome development is to keep it as simple as possible. I think a lot of people around gnome went way too far, but libadwaita is the right direction. GTK4 is very powerful despite what people think.
Hopefully the leadership will get more involved with average users. The problem with gnome as it stands is that they don’t have a good understanding of what is actually used. I think they should absolutely not follow KDE, as KDE is very cluttered. I like the minimal menus and clean design; the problem is that the gnome UI design guidelines don’t specify how many settings to retain. They just say to remove stuff that may not be used, which is a train wreck.
I wish they would better leverage gnome extensions to do testing. They could have experimental features be extensions that people could try. Combine that with some sort of feedback system and you can rapidly test new things.
I also think the guidelines should specify what is considered necessary as far as options go. Gnome keeps things streamlined and well tested but I’ve noticed some app developers strip out elements they don’t think are necessary that end up causing major issues.
Lastly, they should work with the Mint team to combine efforts. I think they have similar end goals, and it would be beneficial to create shared standards that are used across desktops. I think the Mint team is overreacting about GTK4, but that’s just me.
Possibly, but the article says they were operating on surplus funds until they ran out. Sounds like because of the surplus they weren’t actively looking for new sources of income…until oops, pocket is empty.
Yeah, I was a Gnome user until Gnome 3. That was so unusable I switched to Xfce and later MATE. Their insistence on that big bloated touch-screen interface for a primarily desktop UI was so stupid, and it cost them users.
Now not enough people care if they stick around to fund them.
Ugh! I used to only use Gnome. KDE just seemed like a mess. Gnome 3 came out and I tried to like it, but decided to take a break. I used classic Gnome, Mint, Xfce and some others that I can’t remember. For years I bounced around. Finally, a year ago, I was ready to give Gnome a shot again. I really tried: learning the quick key commands, then giving in to old-style habits and getting all the plugins that made everything just right.
Two things happened. First, I updated my OS and some of my 3rd party plugins no longer worked. Second, I didn’t like how I had to install a bunch of community plugins to get basic desktop functionality. I finally realized that what I wanted from a desktop GUI was not what the Gnome vision is, and I was forcing my wants with add-ons.
I heard that the Steam Deck uses KDE. So I tried KDE and was really impressed. The messy feel that I disliked seemed to be gone. KDE is currently working well for me.
This is just my experience. I have read plenty of forums with people who seem to like modern Gnome.
Same.
When I started in 2000, I was using the OG KDE. Gnome just felt limited back then. Then Gnome 2 came out and it was perfect. KDE 3 was a fucking mess so I stuck with Gnome.
When Gnome 3 came out, I couldn’t stand it one bit. Even Canonical came up with an alternative with Unity. I stuck with MATE for a long while. Then KDE 4 and 5 came and it was great again. It still has a lot of bugs though. It’s not as stable as Gnome. But at least it’s usable. So I’ve switched back to KDE.
KDE 6 has been rock solid for me, I haven’t had any issues with it yet
I think it may have to do with the NVidia driver and the compositor. It’s been really iffy.
I didn’t like how I had to install a bunch of community plugins to get basic desktop functionality
This seems to be the main gripe with Gnome. I disagree.
This “basic desktop functionality” people are missing in Gnome is usually the standard desktop metaphor. You can hack it back into Gnome with extensions, but that’s not what Gnome is going for.
If you want to have lots of icons on screen that show info and can be clicked on for more info or actions, just use KDE.
Gnome works much better on laptops with a touchpad and no mouse. Its philosophy is that you only ever really do one thing at a time, so it’s designed to show you the program you’re currently working in and nothing else, until you need something else. My only gripe with it is that the top bar is actually useless, so I use one extension to hide it.
Yeah, that’s why I left. I realized I was wanting a different more Win/Mac/KDE(Classic?) experience on my desktop. It’s just hard to leave because I used to really like Gnome back in the day.
Didn’t they just get like a million dollars from the Sovereign Tech Fund?
It comes with strings attached. They can only use it in a limited number of ways. (Assuming I’m remembering correctly)
Also, a few million dollars gets eaten up fast when you are paying actual developers.
Yup.
Say you’re paying just under market rates for mid-range non-web developers: about 90k ea. That costs you half again as much, once you factor in benefits, so 150k ea. A million pays for 6 devs for a year, and leaves you some change.
Then you have operating costs: at least one person in each of HR, finance, legal, and IT ops - at a bare minimum. You have equipment and utility costs. And we haven’t even gotten to management; even if everyone reported to a single person, 10 direct reports is stretching it, and they’re not doing other jobs like networking and seeking other funding, so you need people for that.
In a bare-bones organization paying people less than market rates, a million dollars probably buys the foundation between 3 and 6 months of operating runtime.
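The arithmetic above can be sketched in a few lines. To be clear, every number here is a rough assumption carried over from the comment, not an actual GNOME Foundation figure:

```python
# Back-of-envelope runway estimate. All inputs are rough assumptions
# from the comment above, not real GNOME Foundation numbers.
salary = 90_000      # below-market mid-range US dev salary, USD/year
tce = 150_000        # total cost of employment: ~1.5x salary, rounded up
budget = 1_000_000   # the hypothetical grant

devs_for_a_year = budget // tce               # 6 developers
change_left = budget - devs_for_a_year * tce  # 100,000 USD left over

print(devs_for_a_year, change_left)
```

And that’s salaries only, before any of the overhead (HR, finance, legal, IT ops, management, equipment), which is what shrinks the runway down to months.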
As a caveat, you can definitely find cheaper devs than those prices when looking outside of the U.S. 90k would get you pretty close to the top in Sweden for local companies, for example.
The developers get less, but it ends up costing more to employ people in the EU. In the US, the rule of thumb - for white collar, non-executive jobs, at least - is 1.4x the salary for TCE (and it’s often reasonable to round up to 1.5). For EU employees, it’s between 1.5 and up to 1.8. Norway is 1.7; I don’t know what Sweden is, but I’d assume it’s around the same.
The social welfare benefits are far better in those countries, and it’s the companies paying for those in that overhead. The better the social welfare net, the higher the costs. There may be exceptions, but they’re the minority. If you want really cheap labor, go to countries with nearly no social welfare.
Offshoring to reduce costs isn’t the point; for the most part, you get what you pay for. Even offshoring to countries with notoriously cheap labor, if you want good programmers, you end up paying much closer to domestic costs. Highly educated, experienced programmers command higher prices, regardless of the country, but when companies offshore labor for cost control, the cost of the labor is usually the most important decision factor and line managers are left with the consequences.
Regardless, the difference in TCE isn’t going to make a huge difference in how far a million dollars goes. People are expensive.
The developers get less, but it ends up costing more to employ people in the EU. In the US, the rule of thumb - for white collar, non-executive jobs, at least - is 1.4x the salary for TCE (and it’s often reasonable to round up to 1.5). For EU employees, it’s between 1.5 and up to 1.8. Norway is 1.7; I don’t know what Sweden is, but I’d assume it’s around the same.
So I get where you’re coming from, but this is really not true, and I’ll provide you with some numbers as to why it is not true.
Let’s check out the median salaries for senior engineers in Stockholm using levels.fyi: https://www.levels.fyi/t/software-engineer/levels/senior/locations/greater-stockholm
As levels.fyi automatically converts to local currency, this shows as 800k SEK, or about 76k USD at today’s exchange rate. Multiplying by your upper EU factor of 1.8 gives us about 137k USD.
Now let’s plug in the numbers for San Francisco: https://www.levels.fyi/t/software-engineer/levels/senior/locations/san-francisco-bay-area
3.375m SEK, or about 321k USD. Using your 1.4 factor for the U.S., we get roughly 450k USD.
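Redoing that conversion as a sketch, where the exchange rate (roughly 10.5 SEK per USD) and the 1.8/1.4 multipliers are assumptions from this thread, not authoritative figures:

```python
# Rough TCE comparison using the levels.fyi medians quoted above.
# Exchange rate and multipliers are thread assumptions, not official data.
SEK_PER_USD = 10.5

stockholm_salary = 800_000 / SEK_PER_USD   # ~76k USD, median senior salary
sf_salary = 3_375_000 / SEK_PER_USD        # ~321k USD, median senior salary

stockholm_tce = stockholm_salary * 1.8     # ~137k USD total cost to employ
sf_tce = sf_salary * 1.4                   # ~450k USD total cost to employ

print(round(stockholm_tce), round(sf_tce))
```

Even with the heavier EU overhead multiplier, the Swedish total cost comes out to well under a third of the San Francisco one.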
The contrast is of course largest for San Francisco, which is the highest-paying area for engineers, but the thought experiment is basically replicable for any U.S. city with a tech scene, which is the fairest comparison to Stockholm: the most expensive city in Sweden, and one with a tech scene.
Essentially the TCE cannot explain the discrepancy in salaries in tech between Europe and the U.S.
Before we inevitably go there, COL does not adequately explain it either - San Francisco is very expensive, but Stockholm is far from being a cheap place to live either. Even when adjusting for this factor, the total amount left after living expenses is quite significantly higher for someone on a U.S salary.
It’s basically a fool’s errand to try to logically explain this discrepancy. The honest answer is that capitalism follows no strict logic, and pay becomes whatever the people with the money can get away with. They just happen to be able to get away with far less in Europe.
You point out that the comparison is unequal, but do you realize how unequal? Stockholm is the 102nd most expensive city in the world; San Francisco is the 13th. If you’re going to compare salaries, at least pick a city closer to Stockholm, like Cleveland, OH (still more expensive at rank 84). 2.4M people live in Stockholm’s greater metro area; San Fran is close to double that size at 4.6M. Cleveland has 3.6M in the greater metro area. Larger populations mean statistically larger employee pools, although economic focus plays a large part.
But regardless, my point was that $1M doesn’t go very far, no matter where you hire your devs, if you at all care about quality. Even your $137k Swedish devs only get the Foundation one more developer, with even less pocket change.
And Sweden loses if we’re playing the “cheapest devs” game. If you’re hiring and you want to get the most resources for the least money, you’re going to look in Mexico or South America and get the advantage of more time zone overlap with the rest of your organization (if you’re in the US); or you’re going to look in one of the less-well-off EU countries, or even Africa, if you’re in the EU. Ukraine was a fantastic place to get great developers at good prices, although they’re unfortunately being fucked over by Russia at the moment. Heck, if your leadership is in the EU, SE Asia doesn’t look so bad time-shift wise, and India has a ton of tech hubs, still relatively cheap labor, and shitty labor laws. China has a great labor pool with highly skilled developers at relatively inexpensive prices.
But we don’t want to play the cost game, right? It isn’t about minimizing salaries - although it’s certainly a consideration, it shouldn’t be the main decision factor. Pool size of quality developers is near the top, but vying for that (IMHO) is time zone overlap. Maximizing within reason the number of hours your team has for meetings, so that nobody has to work outside of hours to hand meetings is critical. Language skill overlap is up there, too. Cost is no higher than fourth, and there might be other things that weigh higher - such as, do you already have a presence in that country. Adding another Ukrainian developer to the couple of guys you already have there might make more sense than hiring someone isolated in Portugal, even if they’re cheaper.
I keep straying off topic, though. Again: the fact is $1M doesn’t buy you a lot of time. 6 US devs for a year, maybe. Getting them in Sweden might buy an extra two months, which you might very well lose because those people are geographically detached, and now you have to contend with cross-national tax and labor laws, which makes your payroll and HR more expensive.
Good riddance.
Well time to go talk to Amazon, Microsoft and Google.
I will say that the current leadership is better than what they had previously.
Yikes, however you may feel about GNOME don’t set the bar so low.
That’s where the money is though. No one else is going to donate millions of dollars.