Hanno Behrens:

Actually not so much.

Of course we had fewer really bad HLLs around like Python; well, it was around but mostly unknown. But we had our share of C++ and Java crap, which is totally comparable. Our processors were mostly 32-bit and we knew it was time for something new, something bigger, and it was a waiting time like waiting for Christmas.

We had our first well-usable Linux variants, from SuSE for example, and the toolchain was as fine as it is today when you are talking about Linux/shell and the build-essential chain. We had a very fine LaTeX and Vim and everything.

Our X was working fine and there was already no competition on usability from Windows 95/98/2000. That was already the Windows crap we know today, and it has not evolved much since. But at those times 20 years ago it was still, uh, what everybody was used to.

Just look today at that awful requester Windows gives you when you have to select a file from somewhere. It is just crap; this widget and most other widgets have aged badly, too. Whenever I have to use it after being used to the fine widgets I have on Linux/Kubuntu/Plasma/Qt5, I get a bad scratch on my skin as if I had been infected with some skin parasite that starts to lay eggs beneath the surface.

It is something that leaves me with a big urge for a hot shower afterwards. It is dirty, ugly, something you don’t want to touch.

Back in 2000 this wasn’t there. It was still “relatively” new and we didn’t know much better.

Still, in those times we could use integers to store memory addresses without much punishment, and whoever did that was rewriting all of his software three years later. Luckily I did that only once in my lifetime; it was a terrible idea, and I have the excuse that it was one of my first C programs.
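
Why that habit broke on the jump to 64-bit can be sketched in a few lines of arithmetic. This is only an illustration; the address value is made up:

```python
# Hypothetical 64-bit heap address (made up for illustration).
addr = 0x0000_7F3A_1234_5678

# Storing it in a 32-bit integer keeps only the low 32 bits.
stored = addr & 0xFFFF_FFFF

print(hex(addr))    # 0x7f3a12345678
print(hex(stored))  # 0x12345678
assert stored != addr  # the high bits, and with them the pointer, are silently lost
```

In C the fix is to use `intptr_t`/`uintptr_t` from `<stdint.h>` rather than `int` when an integer must hold a pointer, since those types are guaranteed wide enough.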

The jump to 64bit really changed a lot.

GPU programming still wasn’t a thing. The general-purpose core hadn’t been invented; a GPU was still just a texture pipeline, not much more. Still, I started to use GPUs back then by harvesting the power of filters: I put my data into a data field and let a GPU filter run over it. That was about 10,000 times faster than doing the same thing on the CPU, which was something. When the first CUDA came up later, it was a revolution that is still shaking the world, especially after OpenCL made it vendor-neutral and globally applicable.

It was the phase when I still thought very badly of i386 assembly, which was not the fine code we got with x86_64; a lot of the stuff we already knew we could have was missing.

The big mainframes were finally running out, and the old men really noticed now that they were sitting on a sinking ship. Around 2000 the whole dotcom bubble went from hype to getting real, with the usual healthy and productive crash that comes with that, which purges all the crap out of the system.

Linux started its path into big computing and was slowly taking over server hardware. At least on the web side, hardly any servers were running without Linux and Apache. Still, all of us using Linux were belittled and smiled at. I had been on Linux since ’94, so I didn’t smile back; I just scratched a mark in my endless book to remind me of this phase for endless laughter later.

I can be as happily vindictive and resentful as only a German can be. I mean, we invented the climate-change hoax back when all the witches were cursing us with bad, cold, rainy summers and the stupid idiots of the 16th century were burning all the young women. I have never forgiven the witch burners for that, and when I see climate hoaxers today I feel the same rage towards them as I do when thinking of this awful German history. I do not forget history.

And I can still smell the burning flesh of the innocent. And this is why I do not forget this about Windows. That’s only 20 years ago, not 470. And it is on the same level of unenlightened crap and we suffered under it.

The world was still a dark one and the future was uncertain. It wasn’t really clear that we would make it; there were good signs, but it was still unclear.

Also around 2000 the first Atmel boards came up. I remember (and still own) my development board, those small computers that later went into the Arduinos. I built my first surveillance drone and tried to sell it to several security firms; nobody was interested. Twelve years later that was all the hype, but by then it was coming from China, not from Germany.

We still had our DM, the Deutsche Mark, but that changed quickly when the Euro came. And with that the new Europe was on the horizon; we all had high hopes about it. This changed some things about programming here, because now things were outsourced. Many workplaces were shut down and the coding jobs transferred to our Eastern European neighbors. It changed a lot for small local guys like me.

We were suddenly in competition with huge programming pools worldwide that were dumping out lousy software for prices that would not even pay a cleaning woman here. Prices went down rapidly. Software quality too.

The quality repaired itself after some years, but I call those years the years of software famine and failed harvests. A lot of crap flooded the market. I withdrew to the Linux world; that was where I dumped out a lot of commercial Java code, because I could write it on Linux and sell it on Windows, which was the only way to survive as a single software dev at those times. I worked on the Linux kernel, and with that put a lot of effort into not just everyone’s future but my own.

It was a project my life did depend on: Linux.

When Windows XP was launched, it was the last time I had a Windows box. In an epic crash that crappy thing destroyed about 60% of the software I had written thus far, the worst crash of my life, and it did not only crash the Windows side, it destroyed my Linux partition with it. I recovered about 40% of the data; the rest was lost forever.

I will never forget that, and I suspect it was intentional from Microsoft: either it was deliberately an attack on dual-boot systems, or it was an insane level of incompetence at the driver/OS level. I never, never forgot this, and since then the only Windows I accept on any of my systems is quarantined away in a virtual machine of some kind.

I refuse to run Windows anymore, not even for “fun”. The destruction of 60% of your life’s work is something you do not forget and do not forgive. I mean, hey, those were two independent OSes on my system with two independent hard drives, and fucking Windows destroyed my data AND my backup with one stroke. Yeah. If I ever trust that crap again, it will be my fault.

Since 2003 I have only run Linux. There is no way back; I burned that bridge forever, and it was the best thing I did in all my life. Because a hard decision also makes you better at the things you decide for. I dumped Java around the same time, somewhere in 2005/6 or so, refused to work in that language and dropped all jobs that depended on it. Because with that software I was feeding Windows and making Windows applications work. I was helping my enemy and thereby collaborating with them.

It was against my own interest to do so, and because I do not act from hate but from love, I put all the time I now had into building myself a new future in the Linux world, and I dug even deeper into the Linux infrastructure than ever before. In the following years I was a huge destructive tornado that killed Windows Server systems and databases all over my city and replaced them with Linux systems.

I had the plan to make them suffer, and I did. My systems were on point and did the job for about 1,000 to 1,500 Euro, where my competition was sucking the marrow from my clients’ bones; their solutions were at 25,000 at least, or higher. I undercut their prices so drastically and brutally that I wiped out a lot of my competition at that time. Many Windows Freddies lost their jobs, and I did that joyfully. I undercut the prices brutally and delivered better, faster systems. I also cut all the crap.

It was war.

Or better open season because they were like sitting ducks, they didn’t see me coming and I can be very nasty when I’m competitive.

I won.

It wasn’t a fight, it was a slaughter.

So it was a happy time for me and I fully embraced Linux. I did nothing that could work on Windows, or did it only to destroy Windows and make the final transition. It was my answer to the declaration of war that came from Steve Ballmer, whom I utterly hated. He declared war? Fine with us. We took on the fight while he only talked, talked, talked. We went out and killed systems, hundreds, thousands. I also kicked some of my Windows-only friends out of their jobs with that. I offered them to join me; they refused; they went into bankruptcy within a year. I wasn’t taking prisoners. I took that declaration of war very seriously. My revenge for the loss of my data was fierce; I had actually felt personally attacked by it. While before I had only been a “freak” to them, now I came and acted like the reaper. I built up strategic networks with other Linux admins here in my city; we coordinated our attack on the old infrastructure and helped each other out in the best way Linux cooperates and teaches cooperation, while our competition only wanted personal profit. They stayed solo; we were an army, and we wanted more than profit. We wanted freedom.

Freedom for us and for our clients.

The term cyber warrior that I gave myself formed around that time; I felt, and acted, as if I, we, were at war. And I wasn’t even looking much at my personal profit doing it. I could live from it, and that was enough; the rest wasn’t so important. Every generation has its own kind of war. My war was working on the destruction of Windows infrastructure, subverting it and undercutting the prices while delivering a better product.

And that is, all in all, what I did around 2000. I also finally ditched C++, for the same reasons I now so strongly reject most aspects of OOP and the harm they do to our infrastructure and our CPUs, which do not like atomized data, and which was and is the cause of the bad software we were able to replace so easily.

With the upcoming mobiles this war ended; we have won.

We left our enemies in smoldering ruins on burned soil that never will harvest again. And we made damn sure of that.

Tony Wallace:

Other answers here say “not much”.

I disagree. In the old days programming consisted of two parts: the operating system and the application language. If you sat down and read the (paper) COBOL manual from cover to cover, you knew what you could do and how. The same applied to programming on PCs. Paradox, for example, handled both database and form design. You were working in a single environment with a single tool, and that was typical.

Your typical modern application has separate systems for the database, web server, page generation and so on. Because of this, programming is much more complex than it used to be.

Dale Strickler:

In my mind, the notion of computer programming is the extension of process and information automation and efficiency that goes back centuries. When you look deeply enough many of the cool computer algorithms were developed by mathematicians long before computers existed.

The biggest change I have seen is the size of the program environments. At one time you wrote the WHOLE program and there were no real libraries (maybe some basic math) that you used. Nowadays anything you write is in the context of massive amounts of other people's code. Code for the OS, code for the connectivity services, code for the GUIs, code for the ad services, the analytic engines, framework code, etc… Any modern program that does not depend on at least 10 pre-existing bodies of code would be an exception.

This brings me to a side note that I think ha...

Endre Enyedy:

Well, I guess I will stir up a storm.

No. Computer Programming has not changed since its inception. It is still the same as it was at the beginning.

We still write a set of very specific commands in a given, logical sequence. That is computer programming.

Now, the tools we use to write with have, thankfully, improved a huge lot, many (not all) of them making our life supposedly a lot easier.

How much has writing changed over the last 60 years? Not at all. By and large, we use the same words, the same style and the same rules. (Emails and texting still obey the same rules.)

And the writing instrument? Those, yes, have changed. Sixty years ago most people wrote using ink and metal pens. Then came the fountain pens and ballpoint pens, pencils, colored and mechanical pencils, markers, crayons, space pens and whatnot. Then we ended up with text processors and multi-functional editors.

But writing itself remains the same, just as the mental process of creating a computer program remains the same, even when using a lot of modern tools.

Patrick Tan:

Twenty years ago, a developer was always full stack. Today, as more OS platforms and programming languages have risen, there are clearer differences between front-end and back-end developers, and they are blessed with a lot of ready libraries to use. In the past, you had to build your own libraries. The struggle to learn everything and catch up with trends existed then too, but today’s challenge is worse because there is even more to catch up on. Computer programming seems to have been diluted as more start-up companies jump in and roles are divided to focus on one specific area each. Computer programming has certainly become more diversified.

Sedat Kapanoglu:

(Pasting here from my blog post)

Here are some changes I have noticed over the last 20 years, in random order:

  • Some programming concepts that were mostly theoretical 20 years ago have since made it to the mainstream, including many functional programming paradigms like immutability, tail recursion, lazily evaluated collections, pattern matching, first-class functions and looking down upon anyone who doesn’t use them.
  • Desktop software now means a web page bundled with a browser.
  • Object-Oriented Programming (OOP) has lost a lot of street cred although it’s still probably the most popular programming model. New trait-based programming models are more pervasive in modern languages like Go, Rust and Swift. Composition is preferred over inheritance.
  • You are not officially considered a programmer anymore until you attend a $2K conference and share a selfie from there.
  • Because of the immense proliferation of multi-processor CPUs, parallel programming is now usually supported at the programming-language level rather than through the primitive OS calls of 20 years ago. It brought in asynchronous programming primitives (async/await), parallel coroutines like goroutines in the Go language or channels in D, and composability semantics like observables in reactive programming.
  • A pixel is no longer a relevant unit of measure.
  • Garbage collection has become the common way of safe programming but newer safety models are also emerging like lifetime semantics of Rust and snarky jokes in code reviews.
  • 3 billion devices run Java. That number hasn’t changed in the last 10 years though.
  • A package-management ecosystem is essential for programming languages now. People simply don’t want to go through the hassle of finding, downloading and installing libraries anymore. 20 years ago we used to visit web sites, download zip files, copy them to the correct locations, add them to the paths in the build configuration and pray that they worked.
  • Being a software development team now involves all team members performing a mysterious ritual of standing up together for 15 minutes in the morning and drawing occult symbols with post-its.
  • Language tooling is richer today. A programming language was usually a compiler and perhaps a debugger. Today, they usually come with the linter, source code formatter, template creators, self-update ability and a list of arguments that you can use in a debate against the competing language.
  • Even programming languages took a side on the debate on Tabs vs Spaces.
  • Adobe Flash, which was the only way to provide some smooth interaction on the web, no longer exists, thankfully. Now we have to develop on three different platforms with entirely different programming models in order to provide the same level of interaction.
  • IDEs and programming languages are getting more and more distant from each other. 20 years ago an IDE was developed specifically for a single language, like Eclipse for Java, Visual Basic, or Delphi for Pascal. Now we have text editors like VS Code that can support any programming language with IDE-like features.
  • Code must run behind at least three levels of virtualization now. Code that runs on bare metal is unnecessarily performant.
  • Cross-platform development is now the standard because of the wide variety of architectures: mobile devices, cloud servers, embedded IoT systems. It was almost exclusively PCs 20 years ago.
  • Running your code locally is something you rarely do.
  • Documentation is always online and it’s called Google. No such thing as offline documentation anymore. Even if there is, nobody knows about it.
  • A tutorial isn’t really helpful if it’s not a video recording that takes orders of magnitude longer to understand than its text.
  • There is StackOverflow which simply didn’t exist back then. Asking a programming question involved talking to your colleagues.
  • People develop software on Macs.
  • Internet connectivity is the norm and being offline is an exception which is the opposite of how it was back then.
  • Security is something we have to think about now.
  • Mobile devices can now show regular web pages, so no need to create a separate WAP page on a separate subdomain anymore. We create mobile pages on separate subdomains instead.
  • We open-source everything by default, except the code that would really embarrass us.
  • There are many more talented women, people of color and LGBT in the industry now, thanks to everyone who fought against discrimination. I still can’t say we’re there in terms of equality but we are much better.
  • Getting hacked is a regular occurrence. Losing all your user data is usually handled by writing a blog post that recommends changing passwords, and that’s pretty much it. An apology isn’t required.
  • Working as a programmer remotely is easier than ever thanks to the new technologies like video conferencing, ubiquitous internet access and Keurigs.
  • We don’t use IRC for communication anymore. We prefer a bloated version called Slack because we just didn’t want to type in a server address.
  • We run programs on graphics cards now.
  • Your project has no business value today unless it includes blockchain and AI, although a centralized and rule-based version would be much faster and more efficient.
  • For some reason, one gigabyte is now insufficient storage space.
  • Because of side-channel attacks we can’t even trust the physical processor anymore.
  • A significant portion of programming is now done on the foosball table.
  • Since we have much faster CPUs now, numerical calculations are done in Python which is much slower than Fortran. So numerical calculations basically take the same amount of time as they did 20 years ago.
  • Creating a new programming language or even creating a new hardware is a common hobby.
  • Unit testing has emerged as a hype and like every useful thing, its benefits were overestimated and it has inevitably turned into a religion.
  • Storing passwords in plaintext is now frowned upon, but we do it anyway.
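
The language-level concurrency point above can be sketched in a few lines. This is a minimal illustration using Python’s asyncio; the `fetch` function is made up as a stand-in for any I/O-bound task:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound task such as a network request.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> None:
    # Both tasks run concurrently: total time is ~0.1 s, not ~0.2 s.
    results = await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))
    print(results)  # ['a done', 'b done']

asyncio.run(main())
```

Twenty years ago the equivalent would have been hand-managed OS threads or a select() loop; today the scheduling lives in the language runtime.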

Magosányi Árpád:

There are some advancements in the field, but programming is still not a profession in the sense that there are rules of the profession such that, if you stick to them, you cannot make major screwups. I mean the fundamentals are there, but there is no agreement on the rules of the profession, and very few programmers actually follow even those fundamentals.

The most important discovery is Test-Driven Development (TDD). It is a completely different approach to coding: it means you don’t have to keep as many states in mind, you can change your code easily, and it gives you the opportunity to document it in detail.
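The red-green-refactor cycle can be sketched in a few lines of Python; `slugify` here is a made-up example function, not from any real project.

```python
import unittest

# TDD cycle: 1) write a failing test first, 2) write the simplest
# code that makes it pass, 3) refactor, rerunning the tests each time.

def slugify(title):
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_already_clean(self):
        self.assertEqual(slugify("hello"), "hello")

# Run the suite explicitly; in a real project a test runner does this.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is the order: the tests existed before `slugify` did, so every behavior the function has is one a test demanded.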

Another very important thing is Clean Code, which makes the code understandable. That is crucial for continuing development and also leads to fewer errors.

Besides those, there are the patterns, which are standardized solutions to common problem types.

The tool support is also much better. IDEs can fix common problems and perform refactorings for you. This is partly made possible by advances in programming languages: they have more expressive power and converge towards being strictly typed, object-oriented, and functional.

There are advancements in managing the process of software development as well. Agile methodologies deal with the fact that you just cannot get a big chunk of software right at the first try, and Agile Architecture gives a method to design software in a structured manner, and also to make sure that documentation stays up to date.

I believe that we are close to discovering and agreeing on the rules of the profession. Kode Konveyor works on exactly that. We started with TDD, Clean Code, and Agile Architecture, and based on them we came up with an implementation pattern that lets a coder meaningfully contribute to a project immediately (as opposed to the current state of the art, where it takes weeks or months), and that makes coding itself parallelizable. This also hugely reduces the skills needed for coding, as it makes design-related activities part of other professions.

Profile photo for Wayne Wilhelm

“How is the notion of 'computer programming' different today than it was 20 years ago?”

Nearly 40 years ago, I got out of the computer field (as a programmer) thinking programmers would become obsolete: I believed that with the advent of the personal computer, everyone would be doing their own programming.

I was wrong. Having exceptional skills, I assumed everyone else would eventually be able to do what I was doing. That simply isn’t the case. Most people in the 1980s had no idea what computer programming is. Today is no different.

Is the notion of computer programming different today? No.

Profile photo for Petar Prvulović

20 years ago was 2000. We had the internet back then, many of the languages from back then are still in use, and the new ones are not revolutionary, just better at making some things easier. We still write code, compile, upload to servers, store data in SQL databases and files, and use HTTP, Windows, etc.

What’s significantly different is how much we rely on online resources: code repositories, documentation, and support sites like Stack Overflow. Many of the libraries we use are never downloaded but linked/imported directly from online repositories, especially in the JS ecosystem, and looking for answers and copy-pasting code is easier than ever.

So, mostly operational things. Programming itself is pretty much the same.

Profile photo for Scott Tunstall

Back in 1998, Agile development wasn’t really a thing like it is now. We had heard about Extreme Programming (XP), but nobody in Scotland had adopted it. Waterfall was the main development approach. Working in small teams of 3–4 people, you’d be given a target date to release the product, and you worked to get it out the door by that date, even if you had to work evenings.

As far as tools went, if you were a Microsoft man you had either Visual C++ 6 or Visual Basic 6 to choose from for Windows development. If you were a Borland man you had Delphi and C++ Builder, which I liked a lot. Proper drag and drop with C++.

To develop a Windows application with Visual C++ you had the choice of using raw Win32, MFC or ATL.

MFC was a big bloated elephant that included just about everything. You had to #define WIN32_LEAN_AND_MEAN to tell it to behave itself :)

Win32 - good luck creating dialogs and window procedures by hand.

ATL was a nice compromise but I think people preferred VB for forms apps.

Unlike C# and Java applications today, compiling even the smallest application in C++ would take absolutely ages (and, when I’m working with MAME these days it still takes ages) so for rapid application development people were jumping on the VB6 and Delphi wagons. The effects are still felt today - until recently, I was still occasionally dealing with legacy code (badly) written in Basic and Pascal. There’s more of it around than you might think.

20 years ago Intellisense was relatively new. Your autocomplete wasn’t as smart as it is now. The Power Pup was in Microsoft Office and we abused it really badly by making it spout expletives via OLE automation.

Code analysis tools? You had linters for C++ but they didn’t integrate into the IDE, you had to run them from command line. Resharper didn’t exist. FXCop? Nope. Visual Basic only had Option Explicit and Compile as far as quality checks go.

Unit tests weren’t a thing back then so you created test harnesses to exercise your code. (Sometimes I still do create test harnesses.)

As far as learning resources went, we had the Microsoft Developer Network (MSDN), which was a decent enough reference, but no Stack Overflow. Usenet was a thing, so we’d post questions on comp.lang.c++. And, like on Stack Overflow these days, we’d end up arguing with some incel who thinks he’s better than you because he knows the intricacies of smart pointers and you don’t.

Profile photo for Ashutosh Singh

The programming languages popular in those days were:

  1. Basic
  2. FORTRAN
  3. C
  4. Assembly

All of these were structured programming languages. There was nothing like object-oriented programming; Java and C++ were not around. Because of this, programs were not large and there was little code reuse. Most programming activity was limited to scholars of computer science. Since OOP wasn’t around then, programming was mainly mathematical and scientific. There was very little web development until around 2000, known as the dotcom era.

Profile photo for Sandy Perlmutter

When I started, there were two or maybe three cultures. The Business culture used the largest available IBM hardware and the COBOL language (with a little PL/1 and Fortran, also some Assembler). There was also Univac hardware and a few others. You needed to know that system really well to work in it. To begin with, it used cards. Eventually terminals gave you access to your code that was originally on cards.

The Academic culture used the DEC PDP-11, for which Unix and the C language were developed. Most of you are familiar with that. If you worked in this system, that is what you worked in. Because C and the shell were insanely terse, you could work on this system using a “teletype” terminal rather than cards. I cried. Then I trained my team.

There was also a small business culture with some wonky systems I worked on occasionally. One included a 1401 Autocoder emulator. Loaded from cards.

I actually worked on a system where the Unix was the development environment for the IBM COBOL system, and there was a link between them — the mainframe thought the Unix system was a card reader and printer. A Yuge Kluge but it was all ours.

Cards, oh cards. I always had a box of them in my bottom drawer in case we ran out of them. You had to know several types of keypunch machines. Univacs were the best.

And I wrote an assembler in PL/1 in grad school, on cards.

When I worked in a Univac shop, we laughed about IBM Job Control Language (JCL). When I moved to an IBM shop, I had to learn it. Not funny. More cards. Although we had terminals, what you were looking at was cards. (Under the hood, there were cards. I had to load CICS and some other deep operating system stuff, and you loaded JCL etc from a disk.)

We were brave, we conquered, but it was SLOW. Some places you could get one shot a day. You had to concentrate on getting all the diagnostics out in one shot so you could start testing and debugging. One nice thing is that you could get into your one or two languages pretty deeply, read dumps, and optimize your code. You didn’t need to learn two more languages each year. We did have some hairy utilities like SYNCSORT and some hairy disk management systems like VSAM.

Profile photo for Ray Gardener

Our ability to care. I don’t know if this is a universal issue, but it's been the case at some of the places I've been.

There's been press about buggy software. Apple's iOS 13 update, for example, got attention for being perhaps the buggiest in its history. It seems odd given Apple's history of quality, attention to detail, and simply that iOS has been under development for many years — the process should have been ironed out long before now.

When I was at Corel in 1995, project builds were not allowed to show compiler errors, not even warnings. Despite a project containing over a million lines of code, the build had to start and stop without a single such message. If any appeared, the relevant person(s) were contacted and the matter promptly resolved. Similar attention was paid to build breaks and runtime errors. Management was technical and quality was taken seriously. Software was still being distributed in boxes, which of course pressured vendors to get things right the first time.

One imagines a lack of skill leads to bugs — and it can, and does — but more and more I'm seeing that it's apathy. And apathy starts small:

Me: These symbols should start with an uppercase letter, as per the coding guidelines.

Dev: I guess. But it's just semantics, doesn't affect anything.

Me: Well, it affects readability.

Dev: It still reads the same. Case makes no difference.

Me: Well, that's not really the point. Why bother having standards if —

Dev: I'm not changing it.

The thing about apathy, though, is that it spreads:

Me: The build always shows a lot of compiler warnings.

Dev: Yeah, but the code runs. But if you care, fix it yourself.

Me: But it's not in my area, it's in yours.

Dev: Whatever. I'll file it under 'todo someday.'

And like a disease, apathy can spread to worrisome heights:

Me: This code is really byzantine.

Dev: Yeah, it's been around for years. It accumulates cruft.

Me: Okay, but it's dangerous. It's slowing down our ability to make changes, and changes will probably introduce bugs.

Dev: It's working for now. I'm not gonna touch it.

I guess it's wonderful that with high bandwidth and cloud services, vendors can ship updates anytime, even knowingly release substandard software since it can be patched over the wire later. But what began as a convenience evolved into a habit.

And this is not without harm — on top of making projects late/buggy/overexpensive, it costs us our attention to detail, our focus on quality, our unspoken contract to exchange working goods for customers' money. It erodes trust, invalidates our professionalism, and ultimately cheapens our spirit.

The less we push our projects to be great, the more we deny ourselves to be great.

Profile photo for Mario Burgos

I started when I was 37, way, way back. My background? Journalism, professional photographer, comedy writer. My college? An unfinished degree in Philosophy and Literature, and no English.

After running away from my country I tried cabinet making in New York. No luck. While working as a photo lab technician, I bought a TI-99/4A home computer, made by Texas Instruments and promoted by (do you know this guy?) Bill Cosby. I started programming video games in BASIC just by following the manual, but they were too slow. So I called Texas Instruments, and they told me that assembly language would improve the speed of my games dramatically. The assembly language kit was only $44.95. So I bought it, and for the first time in my life I read a book, the assembly language manual, from cover to cover, without understanding one single word. The feeling was devastating, but I continued working with the computer and eventually started taking courses and reading book after book.

14 years later, I was the director of software development at one of the biggest insurance companies. I also worked for Dow Jones, BMW, etc. After I retired, I went back to school and got my Bachelor’s Degree in Management Information Systems at 68, and my Master of Science in Software Engineering at 72.

Now, back to your question. At 31, you have everything to be successful. I am 73 now, retired and can’t wait to start something new. I want to be a woodworker and in about six months I will have my shop with all the tools that I think I will need. I am also writing a book about democracy and have other projects in mind.

The Bible says that Noah was about 600 years old when God asked him to build an ark for a curious project. He was a preacher, not a construction worker and, he did it.

By comparison, at 73, I am still a baby.

Profile photo for Quora User

The code hasn’t changed much beyond a bit of melding and syntactic sugar; however, the landscape of development is, in my opinion, remarkably different:

  1. Git (and distributed VCS) in widespread usage
  2. A deluge of freely available world class compilers and development tools
  3. GitHub & friends
  4. Continuous Integration via remote services
  5. Distributed compilation on demand
  6. Test Driven Development and the use of agile practices in development
  7. Docker
  8. LLVM (which can enable software plugins written in any language)
  9. Editors that rival some IDEs from 20 years ago
  10. More widespread JIT compilation
  11. Type hinting to offer many static typing benefits to dynamically typed languages
  12. Functional reactive UI frameworks
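Point 11 in the list is worth a small illustration. As a minimal sketch in Python (the function and names are invented for the example), annotations change nothing at runtime, but a static checker such as mypy can reject ill-typed calls before the code runs:

```python
from typing import Optional

def find_user(users: dict[str, int], name: str) -> Optional[int]:
    """Return the user's id, or None if the name is unknown."""
    # At runtime these hints are ignored; mypy, however, would flag
    # a call like find_user(42, "bob") as a type error.
    return users.get(name)

ids = {"alice": 1, "bob": 2}
print(find_user(ids, "alice"))  # 1
print(find_user(ids, "carol"))  # None
```

This is the trade described in the list: the language stays dynamically typed, but tooling recovers much of the safety of static typing.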

The philosophies haven’t changed, but when you read books like The Mythical Man Month and Code Complete, many of the suggestions made then are now commonly used practices supported through frameworks and innovations in language design and runtime environments.

There are also many games and applications that took a team months to build in the 90s but can now be done by a single programmer in a matter of days or weeks.

Containerization technologies such as Docker enable application logic to be self-contained in an image targeting a single platform, offering its functionality via REST APIs and/or shared files.

Profile photo for Ian Joyner

It depends on the programmer. Bad programmers have been with us throughout history. They ignore the better techniques because those are hard to understand at first. The best programmers were the people 60 years ago who developed structured programming (ALGOL), type systems (Pascal), object-oriented programming (Simula, Smalltalk, Eiffel), and functional programming (McCarthy, LISP).

Unfortunately, much of that has been lost in the quagmire. C adopted structured programming syntax from ALGOL (via BCPL), but greatly compromised on SP in the wrong ways (devolving to FORTRAN and assembler, the very things we were trying to get away from).

Those people did not understand type systems and hated and disparaged Pascal and all its successors.

C++ attempted to lift C’s game by including stronger types based around classes, but it compromised typing and OO in the wrong ways. Programmers miss the real points of SP and OO. Not only that, but C and C++ have given programmers the arrogant attitude that understanding the computer is more important than understanding computation. That is the cart before the horse.

Functional programming so far has not had a big compromise (although C++ is working on that).

We need to get back to the foundations and principles and their clean application, without compromises that devolve us to the things we were trying to avoid.

Profile photo for Shubhranshu Tiwari

A rickshaw puller from a small city, who has hardly been to any school, shares memes with others on WhatsApp.

My mamma shouts at my father (LOL) because he uses his phone till 12 AM because of YouTube. She says, “Aapka to alag hi chal rha hai, jawani fir se aa gayi hai?” (“You’re on a whole different track, has your youth come back?”) - and we laugh.

TL;DR: for the climax of the story, read below first. But reading the whole thing is worth it.

Hey, how do the statements above answer the question?

Ummmmm…..

This happened because of computer programming. Android became so powerful, its applications became so powerful, and then came one of the most popular apps of the decade.

WhatsApp: everyone from a 5-year-old to an 80-year-old uses it.

What changed? See below.

———————————————————————
“YOU KEEP THE WORLD IN YOUR POCKET NOW”
———————————————————————

[ =======> Shop the world

Buy household things (ration, daily-use stuff) from — Grofer, Big Basket..

Get jeans, a shirt, shoes, anything — Flipkart, Amazon, Myntra….

Buy home appliances, books — Flipkart, Amazon..

Chai peeni hai? (Want some tea?) — Chayoos, Chai Point, Chai-Sutta

[ =======> Pet-Pooja (Food)

No one goes to the dairy for milk — Milk Basket..

Get cheaper food from the same restaurant, even at 3 AM — Zomato, Swiggy..

Get healthy food from — cure.fit..

[ =======> Ghumakkad (Travel)

Want to visit a local area? No need to ask an auto or taxi “Bhaiya kitna lenge?” (“How much will you take?”) — Ola, Uber

Going alone with less money? Rent a bike — Rapido, Uber Moto, Ola Bike, Dunzo

Wanna travel a long distance? Book a bus — Redbus, Make My Trip, Goibibo

No dhakka-mukki (hustle) in the booking-window queue for a reservation — IRCTC

Wanna fly high in the sky? Book a flight from — Goibibo

[ =======> Be healthy

No need to ask your friends for a good doctor — Practo (in a few cities)

Get your medicines at your doorstep — 1mg, NetMeds, Pharmeasy

Get your blood test done at home — 1mg

[ =======> Games & Fun

Play games with friends living 100 miles apart — PUBG, Counter-Strike, Ludo….

Watch TV on your phone — Hotstar, Prime Video, Netflix, Airtel TV, Jio TV, TVF

[ =======> Rishtey Naatey (Relations)

Share your feelings with society — Facebook, Instagram, Twitter

Make videos and be a star — TikTok

[ =======> Know more

Don’t ask the station master if the train is late; leave home later — Where is my train?

What is this? What is that? — Ask Google

Which road goes to Delhi? — Google Maps

Baarish kab hogi (When is it going to rain?) — Google Weather


Meanwhile COMPUTER PROGRAMMING be like - I am the boss

Hahahahahah….. But

KADWA SACH (Bitter Truth)

People are now distant at heart.

Kids don’t come out in the evening to play in the dirt because of PUBG.

People just WhatsApp; they don’t meet each other.

Teens now propose to girls on TikTok.

Social media can promote even a bad person.

Memers dig into anyone’s personal life - huhhh

Keep loving everyone, don’t drift apart, live together. The technology is from you; you are not from the technology!!!

Thanks…
Keep Loving & Keep Programming

Profile photo for Steven Van Loon

Almost all of it. The same algorithms come back in different implementations. The newer implementations are better suited, so we drop one technology for another that reimplements the same algorithms.

For example, remote procedure calls were developed by 1981. RPC was epitomized by client/server architecture (like Microsoft Foundation Classes) from the 1980s to the mid-1990s, at which point Java’s RPC mechanisms started redefining it in Java-specific ways. Microsoft helped open it up with Web Services, and we started using regular HTTP calls. However, that was too bulky, with too much overhead, so REST was created to take more advantage of internet routing. REST with JSON doesn’t compress well, so Google developed Protocol Buffers, which are fast and reliable. They are similar to the electronic data interchange formats of the 1960s and 1970s (aka EDI). The latest craze is machine learning with lots of data, but now security, compression, and internet routing are required. That leads us to gRPC, the current state of the art for RPC. Conceptually it is identical to what was happening in the 1970s, but it takes advantage of the technology changes since.

The implementations change, but the algorithms/design patterns remain the same.
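The constant underneath all of those generations is marshalling: a call is packed into a message, dispatched on the far side, and the result is packed back. A toy Python sketch (the method names and the JSON envelope are invented for illustration, not any real framework’s wire format):

```python
import json

# "Server" side: a registry standing in for the exposed procedures.
PROCEDURES = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def handle_request(raw: bytes) -> bytes:
    """Unmarshal a request, dispatch the named procedure, marshal the result."""
    req = json.loads(raw)
    result = PROCEDURES[req["method"]](*req["params"])
    return json.dumps({"result": result}).encode()

# "Client" side: pack the call into a message, which is what Sun RPC,
# SOAP, REST+JSON, and gRPC each do in their own wire format.
request = json.dumps({"method": "add", "params": [2, 3]}).encode()
response = json.loads(handle_request(request))
print(response["result"])  # 5
```

Swap the JSON envelope for XML and you have SOAP; swap it for a binary schema and you are most of the way to Protocol Buffers. The dispatch pattern itself never changes.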

Epilogue:

The Art of Computer Programming will have to be updated, but fundamentally everything in it is right. That said, many new algorithms have been and will continue to be invented. Machine learning algorithms and quantum computing algorithms are clever syntheses of the old and the new. If we do it right, humans will have to do less work to update the algorithms.

Profile photo for Ian Joyner

Well, first of all, computers were people. In WWII, people like Alan Turing worked out how to crack the Enigma code, and they would laboriously work out the key for the day.

The people who were computers were Wrens (women in the Royal Navy). Maybe I should go back even earlier and say that Ada Lovelace (Lord Byron’s daughter) was the first programmer; she programmed Charles Babbage’s Analytical Engine. This was a mechanical device (and if you are in London, you can see Babbage’s programmable machines in the Science Museum). I knew some of his descendants, and Mrs Babbage managed to burn down her kitchen three times in one year - it was quite an advanced algorithm!

Anyway, back to Bletchley Park (which you can see in my icon here). Actually deriving the day’s key to crack the code was difficult and time-consuming, and a way was needed to do it mechanically. The Poles had developed the bomba machine, and Turing improved on it to crack the keys in a timely fashion.

This is an example of programming by hard wiring. This is not really programming as we know it now - programming by hardware, rather than by software.

“Timely fashion” is important because time is the only resource that we really care about – all other computing resources exist so we can do computations in a timely fashion. Cracking the Enigma code had to be done quickly, hence the electromechanical Bombe was developed. You can see this in the “Imitation Game” movie – I highly recommend it, as well as a visit to Bletchley Park and the National Museum of Computing next door (although that has more to do with the evolution of computer hardware than the evolution of programming).

Then at Bletchley Park, Tommy Flowers developed the Colossus machine, which was programmable. These machines were programmable by patch panel (I think, details are still sketchy on Colossus, even though you can see a reconstruction at Bletchley Park).

But programming by patch panel was slow and error prone, requiring maybe a whole day to enter a program to be run once. So a way was developed to put programs into memory, and John von Neumann put programs into the same memory as data. This has proven not to be very secure, and it is a well-known ‘rule’ to keep programs and data apart (program code can then be reentrant, which is a big advantage).

Another model is to have separate memories for code and data. This is known as Harvard Architecture as opposed to Von Neumann architecture, and it avoids the “Von Neumann bottleneck” where code and data must share the same data path between memory and the processor. But the problem is predicting how much memory you need for programs and how much for data.

This problem can be avoided by introducing a one-bit tag to distinguish code from data – thus data operations cannot overwrite program memory. This is actually much more secure (and would prevent most viruses and worms). Burroughs championed those ‘tagged’ architectures; they are very secure and avoid many other well-known classes of software defects (bugs).

So, the question now is, how do we get those programs in memory? First of all, the instructions were worked out and the bits toggled tediously into memory. Well it was faster than patch panels, and you could punch the programs onto tape for reuse. (Reusability has become a big topic in programming.)

Then someone had the bright idea that we could write a program to take symbolic instruction codes and generate the bit-pattern programs. Even better, to form loops, which required jump instructions, labels could be used and the addresses worked out by the software – this program was called an assembler (no one seems to know why). Memory locations to store data could also be named, with addresses again generated automatically. Programmers no longer had to know about or manipulate memory addresses – that is a BIG clue!

Linkers were then introduced so that addresses between separately assembled pieces could be resolved, and loaders so that programs could be loaded at different base addresses. But even loaders are not needed when relative addressing is used (though that was complained about as wasting CPU cycles).

But mathematical notation was still much clearer: x := y + z (OK, assignment := is not part of mathematics, which uses = for equals, something quite different from assignment – do not confuse the two like inferior programming languages do!). Anyway, x := y + z is easier than LOAD Y to R1, LOAD Z to R2, ADD R1, R2 to R3, STO X, R3 – ugh! (On a stack machine it would be simpler but still a pain: LOAD Y, LOAD Z, ADD, STO X – none of those pesky registers, which are a bad idea.)
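The register-machine versus stack-machine comparison above can be sketched in a few lines of Python. This is only a toy illustration using the instruction names from the text (LOAD, ADD, STO) – not any real assembler:

```python
# Toy sketch: compile "x := y + z" to stack-machine instructions and run them.
def compile_assign(target, left, right):
    """Translate 'target := left + right' into a tiny stack-machine program."""
    return [("LOAD", left), ("LOAD", right), ("ADD",), ("STO", target)]

def run(program, memory):
    stack = []
    for instr in program:
        op = instr[0]
        if op == "LOAD":                 # push a named memory cell
            stack.append(memory[instr[1]])
        elif op == "ADD":                # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "STO":                # pop the result into a named cell
            memory[instr[1]] = stack.pop()
    return memory

mem = run(compile_assign("x", "y", "z"), {"y": 2, "z": 3})
print(mem["x"])  # 5
```

Note that the programmer writing x := y + z never touches a register or an address – the translator handles all of it, which is the whole point being made here.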

So, John Backus and others invented FORTRAN (FORmula TRANslator – from mathematical formulas of course!). But then came a syntax language called BNF, for Backus-Naur (or Normal) Form. This allowed language syntax to be expressed as context-free (read simple and free from other defects of context-dependent languages) grammars. This was based on the work of Noam Chomsky (yes that Noam Chomsky who rightly believes universities should be free and not put students into self-perpetuating debt for the rest of their lives, just to feed the mind-numbing commercial system).

So, Backus and others wanted to rid the world of the curse of FORTRAN (one of those languages that confused = with assignment) and invented the most breathtaking step in programming – ALGOL. ALGOL also introduced recursion, which was forbidden in FORTRAN due to its static way of allocating local variables (in common memory and not on a stack). But already, FORTRAN had a religious and cult following and the word recursion was not allowed to be used in the original ALGOL specification so as not to scare off the FORTRAN addicts.

ALGOL was a really clean and general design, and maybe still one of the best designed languages to date. (As Tony Hoare noted, most of ALGOL’s successors were significant backwards steps.)

ALGOL was the first language (with some extensions) to be used to program a whole operating system and related system software, on the Burroughs (yes, them again) B5000 designed by Bob Barton and team. Barton envisaged a machine designed by programmers for programmers, not by electronics engineers who would leave programmers forevermore to work out how the hell to program the things.

Barton B5000

In the same way that ALGOL was clean and general, the B5000 was a clean and general architecture based on the concepts of ALGOL (even the FORTRAN implementation was recursive). Procedure calls were actually efficient and fast. On other systems it was recommended not to use procedure calls because they were too inefficient. Hence programmers had to bend their programs to make them work efficiently if at all. The point of Burroughs was not to do that, but to write programs logically towards the problem domain, not the machine domain.

Now remember, I said that the only resource we are ultimately interested in is time – not CPU speed, not memory size – you only increase these to make a computation go faster. In fact, just as Bletchley Park started using electronics for speed, electronic computers are only good because they are fast – blindingly fast (quantum computers will be faster still – blindingly much faster!)

Thus a lot of programming research has gone into algorithms that make it possible to do certain classes of computation. These are the tractable class. The intractable class is not impossible – it's just that we can't think of ways of doing those computations, for reasonably sized data sets, that will complete before the life of the universe pegs out (or at least we lose interest before we get an answer).

Now some people think we need languages that save a few processor cycles here or there – but those savings are irrelevant compared to electronic speed and algorithm speed. Worse, those languages sacrifice things such as security and correctness to save processor cycles (have I mentioned C yet? No – I suppose I should, as one of those backward-step successors to ALGOL).

Let’s go back to just before ALGOL. The principles of structured programming were being worked out. Certain simple but dangerous things were to be avoided. In control flow that meant jumps and gotos. Rather, we had one entry and one exit to any structure. These structures were sequence, conditional, and loops (and recursion). These all flowed to well-known places, whereas goto could go anywhere and result in programs that were very hard to follow. For this see Dahl, Dijkstra, and Hoare’s book “Structured Programming” (Academic Press, 1972).
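The three structures named above can be shown in a short sketch. This is an illustrative example (the function and its inputs are made up for the purpose), where each construct has one entry and one exit:

```python
# Sketch of the structured-programming building blocks: sequence,
# conditional, and loop - each with one entry and one exit, so control
# always flows to a well-known place (no goto required).
def classify(numbers):
    evens, odds = 0, 0          # sequence: statements run in order
    for n in numbers:           # loop: one entry, one exit
        if n % 2 == 0:          # conditional: both branches rejoin below
            evens += 1
        else:
            odds += 1
    return evens, odds          # the single exit of the whole routine

print(classify([1, 2, 3, 4, 5]))  # (2, 3)
```

However this routine is entered, control leaves at exactly one place, which is what makes structured programs easy to follow and reason about.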

But it is not just gotos – it is also addressing and address manipulation. Remember how I said one of the significant things about assemblers was that addresses were handled automatically? Well, true high-level languages go a step further: all memory allocation and deallocation should be done automatically and not handled by programmers. Even memory management and garbage collection should be automatic, so that programmers don’t have to worry about these fiddly and error-prone details. This also precludes the pointers and pointer manipulation of several languages – they are a really bad idea.

Next step – we wanted ways to organise larger programs. So we got types and objects. Objects encapsulate the program with the data (not in the above sense of program and data memory of Von Neumann and Harvard architectures). This is so that operations against certain data types make sense. For instance, we drink a glass of water and switch on a computer. We don’t drink a computer or switch on a glass of water – such nonsense should be disallowed by a compiler, and a language should be specified to make that possible.
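The glass-of-water example can be sketched directly. This is a hypothetical illustration (the class and method names are made up), showing a type giving a value only the operations that make sense for it:

```python
# Hypothetical sketch of type-safe operations: each type carries only
# the operations that make sense for it, so "drink a computer" is
# nonsense the language can reject.
class Glass:
    def drink(self):
        return "drinking water"

class Computer:
    def switch_on(self):
        return "computer on"

def morning_routine(glass: Glass, computer: Computer):
    # A static checker (e.g. mypy) rejects morning_routine(computer, glass);
    # at run time, computer.drink() fails because Computer has no such
    # operation - the nonsense is disallowed either way.
    return glass.drink(), computer.switch_on()

print(morning_routine(Glass(), Computer()))
```

The same idea scales up: the compiler checks that every operation applied to a value is one its type actually provides.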

Objects and modules (the work of David Parnas) are not only a good way to organise software but they form a mathematical way of thinking about software. Thus data is only accessed via a published interface (this is also good for concurrency and distributed computing), not via backdoors like pointers (remember I said pointers are a really bad idea – that’s because pointers ARE a REALLY BAD idea).

Now some people want integrated testing, and that is a good idea. So Test-Driven Development (TDD) is now an industry buzzword. Good for what it’s worth, but it really doesn't go far enough. You want testing built right into the software, and you can do that with interfaces. There are two kinds of checks we can do with interfaces. Static checks are done by a compiler so that we don’t have to check at run time (this is the holy grail of testing, but so far it is not completely achievable). Languages use types to do this kind of checking (so you can’t drink the computer).

But we also need to fall back on dynamic tests for what we can’t check statically, such as out-of-bounds access. Not only do these tests help with program correctness, but with security as well. And security is the next step in the evolution of programming, because the industry is currently in crisis over it.
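The static/dynamic split can be made concrete with a small sketch (the function here is a made-up example):

```python
# Sketch of the two kinds of check described above: a static check a
# tool can perform before the program runs, and a dynamic check the
# runtime must do because the value is only known at run time.
def nth_item(items: list, n: int):   # static: a type checker rejects nth_item(3, "x")
    if not 0 <= n < len(items):      # dynamic: bounds test at run time
        raise IndexError(f"index {n} out of range")
    return items[n]

print(nth_item(["a", "b", "c"], 1))  # b
```

The type annotations cost nothing at run time, while the bounds check turns a silent memory error into a well-defined failure, which is exactly the correctness-and-security benefit described above.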

So, if TDD does not go far enough, what does? Design by Contract (DbC). This was championed by Bertrand Meyer, who implemented it directly in his language Eiffel.

Design by contract - Wikipedia
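Design by Contract is native to Eiffel; the following is only a rough Python sketch that mimics a precondition and a postcondition with assertions (the integer-square-root example is made up for illustration):

```python
# Rough DbC sketch: the checks live with the routine's interface, so
# every caller gets them - a precondition on entry, a postcondition
# on exit, both expressed as assertions.
import math

def int_sqrt(n: int) -> int:
    assert n >= 0, "precondition: n must be non-negative"
    r = math.isqrt(n)
    assert r * r <= n < (r + 1) * (r + 1), "postcondition: r is the floor square root"
    return r

print(int_sqrt(10))  # 3
```

In Eiffel the contract is part of the routine's declared interface (require/ensure clauses) rather than statements in the body, which is why it goes further than bolted-on tests: the contract is documentation, specification, and runtime check all at once.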

Now I should also mention Smalltalk and Alan Kay, also significant steps in computing and object-oriented programming (a term that Alan Kay introduced, but he later thought message-based programming was more significant).

(Notice I have only mentioned C once, and I have saved C++ until now. These are widely used languages, but they really introduce complexity that should not be there. Programming has always evolved towards simplicity. Simplicity tames complexity – complexity cannot be tamed by more complexity; that just makes the situation worse. So you can see what I think of the languages I reserved mentioning until the end. But a lot of people in this industry don’t understand this, and hence some languages have gone in completely the wrong direction.)

Oh yes, I get to the end and realise I have not even mentioned abstraction, so I have missed one of the most important concepts. Note that abstraction means simplification, not complication or obscurity (not abstract art, although abstract art is actually about simplification too). Abstraction is considering detail at the correct level, keeping detail at that level and not letting it leak to higher levels. Thus, as I said, assembly language originally did away with the detail of addressing – and this resulted in a whole lot of advantages. Higher-level languages do away with the detail of memory allocation and deallocation altogether. So abstraction is an essential concept in the evolution of programming – but a lot of people misunderstand it.

(Maybe this has become my longest answer ever. I’m sure I’ll revisit it and do some editing.)

Profile photo for John L. Miller

My answer is a qualified "yes, coding is easier today." In fact, it's so much easier to attain a given (relatively complex) result that it's not even worth comparing the two.

In 1985 the 6 MHz IBM Personal Computer/AT was state of the art for microcomputers, as were the Apple IIe and the first incredible version of the Apple Macintosh. Most programs rendered text or simple ASCII-art-style graphics. Most games were incredibly simple (think Centipede, or adventure games with a picture and four choices at each step of play). It took an insane amount of work to get these results, dealing with a processor that was perhaps as slow as 1 MHz, 64 KB of RAM, an 80-column screen if you were lucky, and amazing 640 x 480 graphics.

64 KB of RAM, and for almost everyone, floppy drives at best. For significant programs, every byte had to be accounted for and carefully structured to fit the pieces you needed into the tiny RAM. When I started working on my first game – NinjaQuest (never released) – I had to write my own sprite editor in assembler and hand-animate the characters. Side-scrolling had to be done very carefully to change a minimum of information on the screen. And don't even get me started on sound... All in 6502 assembler, which I affectionately refer to as write-only code. BTW, the computers were around $2,000 in 1985 dollars.

(assembly code from Gamasutra: David H. Schroeder's Blog )

Today if I want to write a program, I boot up a $500 machine with a $350 4K UHD monitor. I run either Visual Studio or IntelliJ IDEA, and have more than a terabyte of disk space to store the programs I write. If I want to animate things or have 3D characters, and if I'm not planning to sell the program, I can browse the internet and find some pretty decent sprites, art, pictures, and 3D models for the taking. With a couple hundred hours of effort I could put together a game with 3D characters moving around, perfect 3D sound, and have it be reasonably bug free. Thank you DirectX and OpenGL!

Suppose I wanted to build a sidescroller like NinjaQuest that I worked on for several hundred hours before I gave up. Between the tools available today and the incredible resources, I could build a better experience that looked and sounded better than I could ever have hoped in perhaps 100-200 hours. Then I could share it with the entire world over the internet.

Is coding easier today? For the things we can meaningfully compare, ABSOLUTELY. The barrier to entry is lower, the sources of information are higher quality, and there are tens of thousands of people out there who will answer your questions for fun. Your biggest problem today is, quite simply, choice. So many choices of language, of environment, of platform.

Choice is, of course, a good thing. As are computers.

Profile photo for Ragu Rajagopalan

How has computer programming changed in the last decade?

Thanks for A2A.

Programming has changed a lot in the last decade.

Earlier (E), the major focus was on the functionality and then the importance given to the user interface.

Of late (New), the importance has been to deliver the same functionality over the different media - Phone / Tabs / Voice / Apps etc.

E - The target platforms were only a handful, and the developers were clearly told what the identified platforms were (mostly PC - Windows / iOS and internet based). So the developers had a handle on the platform-based developer tools. Though browsers and their upgrades were changing, they posed little impact on the developers.

N - The platforms are changing a lot. Newer models are released every now and then. The specifications change. Newer versions of OS are released, and older versions are not supported for long. Customization is required for many of these platforms. So the focus is on the “delivery medium” and not core functionality!

E - Structured programming was the norm keeping in mind ease of maintenance aspect.

N - Quick and dirty releases are given more importance, to keep up with fast-changing user dynamics.

E - Competitors were few and known. A newer version used to take a long time (1–2 years minimum) to get to the market, so companies had enough time to balance Customer Experience against the desired functionality and stay competitive in the market.

N - Many apps are being developed and deployed quite frequently. The competition is way too high, and there is pressure on app developers to ensure that their products meet customer expectations and beat the competition! This means apps have to be developed fast and deployed soon. Upgraded functionality is the norm every day. If not upgraded, the app would be outdated!

E - Companies started collecting usage data, but not much analysis was done. The data was used for internal purposes and analyzed by developers and analysts using tools. Any changes based on the analysis would take a minimum of a few months to deploy.

N - Data has become the center point. Customer experience, ads, and rewards / awards are customized based on data. Automated data analysis (machine learning and AI) has made decision-making more intuitive, and this has shortened the cycles between releases.

E - Presentation layers with graphs and stats were manually coded. The elements for analysis were controlled (limited) to the options provided by the developers.

N - Many tools are available which give immediate results on data slicing; the options to analyze are now limited only by the tools, not by the features provided by the developers (100 times better than the limited features developers used to provide).

I can keep listing them but to keep the interests of the readers I am stopping here.

Profile photo for Krupa Tk

Programming in computer science has become much easier now. Earlier programming languages were very difficult. And in the coming days it'll become even easier.

For example: decades ago, assembly-level programming was done, but it was very difficult.

Now, for example, Python is comparatively easier to program.

Profile photo for Quora User

Good common question … let’s start with the short simple answer

Early programming was manual, tedious, and hardware-specific, requiring programmers to know their machine intimately.

Modern programming is abstract, accessible, and collaborative with tools and languages designed to make developers more productive and creative.


Let’s break it down for a more detailed answer … shall we:

1. Programming Languages

  • Early Days:
    • Programmers used machine code (binary) or assembly language that was just one step above binary and very hardware-specific.
    • Example in assembly:
        MOV AX, 1   ; Move the value 1 into register AX
        ADD AX, 2   ; Add 2 to the value in AX
      Imagine writing an entire program like that! It was slow, detailed work, and debugging was a nightmare.
    • Later, higher-level languages like Fortran, COBOL, and Lisp emerged in the 1950s-60s. These were much closer to English and mathematical notation, making programming easier and faster.
  • Modern Times:
    • Today, we have powerful, user-friendly languages like Python, JavaScript, and Swift that are abstracted far away from hardware.
    • A modern Python program to do the same as the example above:
        x = 1
        x += 2
        print(x)
      This is cleaner, easier to read, and runs on any machine without modification.

2. Development Tools

  • Early Days:
    • Punch cards were used to write programs. Each line of code was punched onto a separate card, and you had to run the stack through a computer to execute it.
    • If you made a mistake (like dropping your cards or punching the wrong hole), you had to start over. Debugging was literal—you'd find "bugs" (like moths) stuck in relays.
  • Modern Times:
    • We now have IDEs (Integrated Development Environments) like Visual Studio Code, PyCharm, and Eclipse that:
      • Highlight syntax errors in real time.
      • Suggest code completions.
      • Integrate with debugging tools and version control.
    • Testing and deployment are often automated, making life much easier for developers.

3. Collaboration

  • Early Days:
    • Collaboration was mostly local. Programmers shared code by hand-writing it, mailing punch cards, or printing out listings.
    • Code reuse was rare because programs were tightly coupled with specific hardware.
  • Modern Times:
    • We now have version control systems (like Git) and platforms (like GitHub) that allow global collaboration.
    • Thousands of developers can contribute to open-source projects like Linux or TensorFlow.

4. Hardware and Constraints

  • Early Days:
    • Computers were massive, expensive, and had limited memory and processing power. You had to optimize for every byte and CPU cycle.
    • Example: Apollo mission code ran on the Apollo Guidance Computer, which had only about 4 KB of RAM and 72 KB of ROM!
  • Modern Times:
    • Hardware is cheap and powerful. Most developers don’t need to worry about memory management or optimization unless they’re working on specific fields like embedded systems or gaming.

5. Accessibility

  • Early Days:
    • Programming was exclusive to a handful of specialists who worked for governments, universities, or large corporations.
    • Learning resources were scarce—no YouTube tutorials or online forums.
  • Modern Times:
    • Anyone with a computer can learn to program. There’s an abundance of free resources (YouTube, Coursera, Codecademy).
    • Programming is taught in schools, and there are coding bootcamps for quick upskilling.

6. Paradigms

  • Early Days:
    • Programming was mostly procedural—step-by-step instructions.
    • Object-oriented programming (OOP), functional programming, and agile development methodologies were non-existent.
  • Modern Times:
    • We have a variety of paradigms (OOP, functional, declarative) that make complex problems easier to solve.
    • Agile, DevOps, and other methodologies streamline teamwork and development.

Final Word:

Today's programmers owe a lot to the pioneers who worked through the painstaking early days to pave the way for the user-friendly tools we enjoy now!

Profile photo for Dani Richard

Going back 30 years takes us to 1991, and 40 years to 1981.

By 1981, Mainframes were still king of computing and Cray computers were the fastest computers.

By 1981 the Pascal language was on the rise. SQL and relational databases were just getting started. Unix was still trying to get a foothold outside of academia. C compilers were found on many platforms. The Motorola 68000 was the core of Sun computers. FORTRAN and COBOL were still taught in colleges.

By 1981, “The Art of Computer Programming” by Donald Knuth had only published 3 volumes.

Robert Sedgewick’s “Algorithms” first edition, which used Pascal, was a year in the future, in 1982.

“Fundamentals of Computer Algorithms” by Horowitz and Sahni had been published in 1978. My personal copy was a 12th printing of the first edition.

I will list some items from the table of contents:

Chapter 2: Elementary Data Structures: Stacks, queues, trees, heaps, heap stores, sets, disjoint set union, graphs, hashing

Chapter 3: Divide-and-Conquer: Binary search, Finding max and min, Mergesort, Quicksort, Selection, Strassen’s matrix multiplication.

Chapter 4: The Greedy Method: The general method, Optimal storage on Tapes, Knapsack problem, Job sequencing with deadlines, Minimum spanning trees, Single source shortest paths.

Chapter 5: Dynamic Programming: The general method, Multistage graphs, All pairs shortest paths, Optimal binary search trees, 0/1 knapsack, Reliability design, The traveling salesperson problem, Flow shop scheduling

Chapter 6: Basic Search and Traversal Techniques: The techniques, Code optimization, And / Or graphs, Game Trees, Biconnected components and depth first search

Chapter 7: Backtracking: The general method, The 8-queens problem, Sum of subsets, Graph coloring, Hamiltonian cycles, Knapsack problem.

Chapter 8: Branch-and-Bound: The method, 0/1 knapsack problem, Traveling salesperson, Efficiency considerations.

Chapter 9: Algebraic Simplifications and Transformations: The general method, Evaluation and interpolation, The fast Fourier transformation, Modular arithmetic, Even faster evaluation and interpolation.

Chapter 10: Lower Bound Theory: Comparison trees for sorting and searching, Oracles and adversary arguments, Techniques for algebraic problems, Some lower bounds on parallel computation.

Chapter 11: NP-Hard and NP-Complete Problems: Basic concepts, Cook’s theorem, NP-Hard graph problems, NP-Hard scheduling problems, NP-Hard code generation problems, Some simplified NP-Hard problems.

That should be a good view of the “state of the art” for an undergraduate textbook from 1978.

I suggest you also examine “The Mother of All Demos” at The Demo - Doug Engelbart Institute

This took place in 1968 on mainframes.

Profile photo for Tom Borkowski

In the early 70’s, a programmer would keypunch his computer program using 80 column punch cards. Then he would feed his deck of cards into a card reader attached to the computer. And then he would look at his printout to determine if his program worked.

Mini-computers came into use during the 70’s with multiple terminals hung onto them.

Programming students would sign up for a time slot, and key in their BASIC programs.

The first BASIC was an interpreted programming system. While sitting at a terminal, students would run their program, make corrections, and run them again until it worked or their time slot ran out.

When the first IBM Personal Computer came out in 1981, it included a BASIC interpreter from Bill Gates’s Microsoft. Programmers used it pretty much the same way as those mini-computer systems.

Profile photo for Joel Schlecht

It really depends on how “early" we are talking about. If you go back to the dawn of electrical computers, it was very different.

Back in the 1940s, computers were mostly analog and built in functional blocks. You would have a circuit that did the calculation you wanted then put another circuit after that to do another manipulation of the data etc… the programming of the computer was the arrangement of those functional blocks and adjustment of the analog circuit in order to accommodate your problem.

Let's say for instance you needed to multiply by 32 and then add 5: you would have a circuit that multiplied, and you set it for 32. Then you connect that output to another circuit that adds, and set it for 5. In many cases you would use patch cables to do the connecting and set some sort of dial to set the values. This type of computer could be useful for, say, computing firing angles for artillery. In many cases, however, the end computer was set in hardware for that particular purpose and the end user could just alter the data. The original ENIAC was a digital computer but still used the functional-block connection method. Soon after, however, the stored-program model came to prominence.

In function… that's not too different from what a modern processor does. A CPU may have an adder, a ring counter, etc. These are just functional blocks that the CPU sends data to by way of a demultiplexer. An instruction comes in and gets demultiplexed to a certain line that will activate a specific functional block. That demultiplexer allows a command to do the rewiring on the fly instead of having it done by hand. Basically, you were programming in machine code by physically connecting up the machine! The end result would be the same, but the “programming” would take much longer. In addition, if you ran out of functional blocks on your computer, you would need to take the intermediate result, reprogram the computer, and keep going. This was fixed by the accumulator, which takes the result and puts it back at the beginning for further processing.
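The dispatch idea described above can be sketched in a few lines. This is a toy model (the opcode names and block table are made up), where an instruction selects a functional block through a dispatcher instead of an operator rewiring blocks by hand, and an accumulator feeds each result back for the next step:

```python
# Toy model of demultiplexed dispatch: an opcode selects a functional
# block; the accumulator carries the result back for further processing.
FUNCTIONAL_BLOCKS = {
    "MUL": lambda acc, operand: acc * operand,
    "ADD": lambda acc, operand: acc + operand,
}

def run(program, value):
    acc = value                                        # the accumulator
    for opcode, operand in program:
        acc = FUNCTIONAL_BLOCKS[opcode](acc, operand)  # "demultiplex" to a block
    return acc

# "multiply by 32 and then add 5", as in the patch-cable example above
print(run([("MUL", 32), ("ADD", 5)], 2))  # 69
```

Changing the program here is just changing data, which is exactly the step the stored-program machine took over hand-wired functional blocks.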

The stored-program/accumulator model, or von Neumann architecture (named for the computer scientist who came up with it), allowed more complicated programs to run and led to the development of computer languages.

It was very different back then for programmers.

Profile photo for Ernest Ivy

In college there are new books with updated material each semester, and the same goes for computer programming… new and updated material is constantly being introduced, and those interested in computer programming have to read, study, and learn the advancements to stay ahead in that field.

Profile photo for Ralf Quint

Well, 20+ years ago, most programmers actually still knew what they were doing and why. Today, far too many “programmers” (or whatever fancy term they use these days) are stuck on some fancy “framework of the month” and kewl fads, but seriously lacking when it comes to the basics and why they are doing things…

Profile photo for Eric Buckley

When you built a website 20 years ago, you did it with HTML and CSS. Not many other tools or libraries were used.

No social sharing.

No front-end or responsive libraries.

Also, CSS was still evolving. Each browser, including the long-forgotten Netscape, handled CSS differently, so coding for all browsers was a huge pain in the ass.

Even HTML was slightly different across platforms. IE wanted stuff coded its own way; Netscape, another way. Netscape was the most popular browser after its release in 1994, but it disappeared as support ceased in 2008.

Chrome was not released until 2008. Hard to believe it came on the scene only 10 years ago.

Firefox was also not around 20 years ago. They didn’t release until 2004.

Anyway, even with only two major browsers on the scene, it was a nightmare to code for both.

Responsive design and mobile devices were not mainstream at all, so all websites were written for desktops or laptops. Typical screen resolution was 640x480, with a few at 1024x768.

Today, there are a wealth of choices and tools for developing a rich user experience for a website.

Profile photo for Dani Richard

My “style” of computer code has changed dramatically since I started coding.

I started with cards and paper tape on machines with 8K to 32K of memory.

My “programmer time” vs. “computer time” trade-off has changed dramatically. I no longer have to squeeze code into small memory.

When I went from programming in assembly to Pascal, my data structures got a lot more complicated. I went from simple linearly searched arrays to binary search, hash tables, trees, and B-Trees.
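As a rough illustration of that jump (a sketch, not the author's actual code), compare a linear scan with a binary search over sorted data:

```python
from bisect import bisect_left

def linear_search(items, target):
    """O(n) scan: what tight-memory assembly code typically did."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n) lookup, practical once richer structures fit in memory."""
    i = bisect_left(sorted_items, target)  # leftmost insertion point
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

print(linear_search([9, 4, 7], 7))   # 2
print(binary_search([4, 7, 9], 7))   # 1
```

On a million sorted keys the binary search touches about 20 elements where the linear scan may touch all million.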

Having worked as a “Systems Engineer,” I saw text processing in a different light. I found that using pattern matching I could extract information from technical documents in PDF, MS Word, or dumps of relational databases.

I was able to find inconsistencies and errors within and between these human-readable documents.

The programs I wrote take 30 seconds on modern laptops and several megabytes of storage. I used a language I had used in the ’70s. Every second on a modern computer would have taken at least an hour on those old mainframes.

The level I think and program at is very different from when I started programming back in 1968.

My style continues to evolve as I program and learn more and more.

I also experiment with other languages like LISP and Prolog.

Profile photo for Ankit Solanki

The applications have changed but the languages are the same, apart from incremental upgrades. For example, languages such as Python and C++ are old languages and are still used by programmers. The language remains the same, but its application has changed drastically.

When Python appeared in the early ’90s, it was used to create user interfaces, but now we see Python used for data analysis, web development, game development, software development, and so on; the same goes for other programming languages.

In the last decade or so the popularity of these languages has also changed. For example, in 2010 Java and JavaScript were pretty popular; by 2015 the popularity rankings had changed significantly, with Python moving up the ranks to replace PHP. Notably, R is used for statistical programming.

Hope this answers your question :)

Profile photo for Viktor T. Toth

Low level programming? Not that different.

Specific details differ. A modern CPU has a much more extensive, more streamlined instruction set than an early stored program computer. The programmer does not need to worry that much about memory and execution speed since modern hardware is more than adequate for most tasks, even when the code is terribly inefficient. And it is much easier to write programs using fast, reliable SSDs, hard drives and cloud storage than messing with punch cards or similar archaic storage mediums. But the basic algorithms remain the same, the basic challenges remain the same when you write code in assembler or a relatively “low-level” programming language like C. There are only so many ways you can handle an interrupt, move bytes through an I/O port, or solve a system of partial differential equations numerically.

On the other hand… application programming has become conceptually very different compared to the early days. What does a variable represent, for instance? A number? A string? An array? An object with properties and methods? How about a single variable representing an entire cloud service? A user identity? A complex application?

On a strictly technical level, of course, there is little difference between assigning a fancy name to a collection of subroutines or simply referring to them as, say, subroutine 23 on magnetic drum 7. But we think about these things differently. Instead of bits and bytes, numbers and strings, we request, say, an authentication token that authorizes our code to perform a specific action on behalf of a user, and then use this token when we generate a service request. Programmers working on application code at this level do not worry about the nitty-gritty details of how many bytes that token occupies or how the memory it uses is freed afterwards; they worry about the token’s persistence.
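A purely hypothetical Python sketch of that way of thinking; none of these names (`AuthClient`, `get_token`, `call_service`) correspond to a real library, and a real token would be a signed, opaque string rather than a plain dictionary:

```python
import time

class AuthClient:
    """Hypothetical issuer of short-lived tokens (illustration only)."""
    def get_token(self, user, scope):
        # What matters to the application programmer is the token's
        # scope and lifetime, not how many bytes it occupies.
        return {"user": user, "scope": set(scope),
                "expires": time.time() + 3600}

def call_service(token, action):
    """Perform an action on behalf of a user, gated by the token."""
    if time.time() > token["expires"]:
        raise PermissionError("token expired; request a new one")
    if action not in token["scope"]:
        raise PermissionError("token does not authorize this action")
    return f"{action} performed for {token['user']}"

token = AuthClient().get_token("alice", {"read"})
print(call_service(token, "read"))
```

The calling code reasons entirely in terms of scope and expiry, exactly the shift in abstraction the paragraph describes.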

Another difference is that nowadays, many applications “live” in a complex, networked environment with code running in several different contexts: e.g., JavaScript code managing the UI of a Web application, server-side (e.g., PHP, C#) code performing the application logic, database queries running against records on a database server, while accessing cloud storage using yet another software interface. This is quite a change from the old school, one computer, one CPU, one user conceptual foundation that characterized personal computing decades ago, or the time-share, batch processing computer environments of mainframes.

Yet another major difference is the emphasis on security. Back in the old days, users were assumed to be people who had authorized access to the system and whose objective was to make things work. So certain bugs were… acceptable. If typing too many characters into a field crashed the application, you could just stick a note on the computer monitor saying “Don’t type more than 20 characters in field X” and that was it, at least until a new version was developed, possibly months if not years later. Today? A bug like that is readily exploited by hostile individuals, cybercriminals, you name it. Today, even a simple Web application needs to be more robust, security-wise, than highly secure military systems decades ago.

So yes, if a programmer, even an experienced programmer from, say, 1991 was transported to the present and entrusted with developing even a simple Web application, he would likely fail miserably at first. The very idea that the same PHP file contains code that runs on the server side, code that runs on the client workstation, code that runs on a database back-end, and perhaps even code that runs on a remote cloud service? It would take quite some time to digest and process these things conceptually. Meanwhile, he would have no appreciation whatsoever of the security challenges that ubiquitous Internet connectivity imposes even on simple solutions.

On the other hand, experienced present-day programmers can do things that back 30, 40 years ago we never even dreamed possible. Take the venerable Commodore 64, for instance: at the time a revolutionary machine with its reasonably capable 8-bit processor running at 1 MHz, and its whopping 64 kilobytes of memory. You wouldn’t expect a machine like that to run a windowing operating system or full-motion video. Yet people found ways to make it happen. Let me tell you, back in 1983 or thereabouts, when I was actively working on developing games for the C64, no sane person thought something like this would ever be possible on that computer.

So yes, I daresay our profession changed a lot, even as some things remained the same.

Profile photo for Dipt Chaudhary

A programmer was asked an interview question - If you had to construct a swimming pool for Mark Zuckerberg and had no budget limit to your project, describe the pool you will make?

Now this guy didn't have a clue about how swimming pools are made or what are the requirements for one. But his answer was what you can expect out of a programmer.

His answer -
Let's consider the components here - A pool, a fountain nearby, a changing room, a bar, a resting/sunbathing area, and some other stuff rich people have by their pools. Since here we are referring only the pool, I will only describe the pool. Also since Mark loves Star Wars, this would be a Star Wars themed pool.

Let's consider the pool to be rectangular with a standard size of 16ft by 32ft.
Now for the lights: there will be one every 4 ft along the perimeter. The colors of the lights will change depending on whether Mark selects a Sith theme or a Jedi theme. On a closer look at each of the lights you will see a distinct Star Wars character, which will only be visible when you are close enough. The lights will also… **gets interrupted by the interviewer**

Interviewer - Let's move on to another question.

Programmer - But I haven't finished my answer, sir… in fact, I had barely started.

Interviewer(who was a wise man) - You don't need to, you're shortlisted for the next round.

Programming is about taking a problem (making the pool) and defining a solution to it (size of pool, lights, right down to each light) where every small detail is elaborated, because you are trying to convey it to the dumbest thing on the planet (a computer).

Edit - If you enjoyed this do check out Dipt Chaudhary's answer to Why do some computer programmers develop amazing software or new concepts, while some are stuck with basic programming work?

Profile photo for Nikhil Kumar

According to my point of view Computer programming changed this way

  • Google is a little more than ten years old. But the company's most popular services have only risen to popularity over the last five to seven years. In addition to Google search, can you imagine a world without Gmail, Blogger or Picasa?
  • Having an Android phone has become just another basic necessity for survival after food, water, and air. And if we look through your phone, we have different applications for different purposes. From shopping to rides, all are available in one click.
  • Competitive coding has seen a sharp increase because learning any programming language has become accessible to everyone, whereas before it was mostly engineering or CS students; hence the level has increased.
  • Recruitment on the basis of coding is at the top. Gone are the days when you would be asked simple DS or algorithm questions; now you’d be asked to implement something ‘using’ the DS and algorithms via CODING.
  • Language tooling is richer today. A programming language used to be a compiler and perhaps a debugger. Today, languages usually come with a linter, a source code formatter, template creators, a self-update ability, and a list of arguments you can use in a debate against the competing language.
  • Even programming languages took a side on the debate on Tabs vs Spaces.
  • Adobe Flash, which was once the only way to provide smooth interaction on the web, no longer exists, thankfully. Now we have to develop on three different platforms with entirely different programming models in order to provide the same level of interaction.
  • IDEs and programming languages are getting more and more decoupled from each other. 20 years ago an IDE was specifically developed for a single language (Eclipse for Java, Visual Basic, Delphi for Pascal, etc.). Now we have text editors like VS Code that can support any programming language with IDE-like features.
  • Code must run behind at least three levels of virtualization now. Code that runs on bare metal is unnecessarily performant.
  • Cross-platform development is now standard because of the wide variety of architectures: mobile devices, cloud servers, embedded IoT systems. It was almost exclusively PCs 20 years ago.
Profile photo for Alan Mellor

The further back you go, the less abstract it was.

Earliest computers needed custom circuits built. They were later configurable to a degree using patch cords - physical cables plugged into jack sockets.

Program source code started out on punch cards and paper tape, where you can see the physical binary pattern of commands. It was only later where that gave way to typewriter style keyboards with letters on. Today, voice input can work.

The programming languages themselves started out as primitive lists of machine-level operations. The story of languages has been to raise the abstraction level closer to how humans think about a problem. Less about how the hardware works, and more about modelling the problem at hand.

Profile photo for Ken Gregg

Think of it this way.

You are designing and building a machine, whether mechanical or electronic, and you want the machine to perform some specific operation (behave in a very specific way) when presented with a very specific pattern of inputs. You want it to perform a different operation when presented with a different pattern of inputs. You continue to add more operations, where each operation is effectively a reaction to a different pattern of inputs. Eventually, you have created a machine that performs several different operations when presented with different patterns of inputs. If you then devise a way to feed a sequence of different input patterns into the machine, the machine can carry out a sequence of different operations, according to that sequence of input patterns.

The machine might be anything from a mechanical weaving loom to an electronic calculating machine.

You have a “programmable” machine. The sequence of different patterns fed into the machine is the “program” the machine will dutifully follow. Each input pattern would be considered an individual “instruction,” and the sequence of instructions makes up the “program.” The machine carries out those input instructions, one at a time in sequence, effectively “running” or “executing” the program.

Each input pattern, in modern digital computer terms, is a “machine instruction.” A sequence of those machine instructions is a “program.”

In the 1830s, mathematician Charles Babbage designed a mechanical calculating machine known as the Difference Engine. Ada Lovelace (officially Augusta Ada Byron, and later Augusta Ada King, Countess of Lovelace) was trained in mathematics and was inspired by the prototype of Babbage’s Difference Engine. Babbage had plans for a new mechanical device, the Analytical Engine, which Lovelace realized could carry out long sequences of mathematical calculations. As an example, she wrote a sequence that would calculate Bernoulli numbers. Only a small portion of the Analytical Engine was ever actually built, so Lovelace never got a chance to run that program on real hardware. Lovelace passed away in 1852 at the age of 36. Nevertheless, her sample sequence of “instructions” for calculating Bernoulli numbers on the Analytical Engine is considered by most computing historians to be the first computer program.
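The computation Lovelace specified takes only a few lines in a modern language. A sketch using the standard recurrence sum over binomial coefficients (with the B_1 = -1/2 convention; this is the textbook recurrence, not Lovelace's exact procedure):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return exact Bernoulli numbers B_0..B_n (B_1 = -1/2 convention).

    Uses the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1,
    solved for B_m at each step.
    """
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

# B_0..B_4 are 1, -1/2, 1/6, 0, -1/30
print([str(b) for b in bernoulli(4)])
```

Exact rational arithmetic via `fractions.Fraction` sidesteps the rounding that a mechanical engine, or floating point, would accumulate.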

All of this significantly predated the arrival of the first digital computers in the 1940s. But the idea is the same. Instead of using mechanical components, electronic components were used to implement the behavior of the machine. Each machine had its own set of input patterns that it recognized to carry out specific operations. One pattern would cause the machine to add two numbers. Another pattern would cause the machine to subtract one number from another. A pattern triggering a specific operation was (and still is) referred to as a “machine language instruction.” The set of patterns recognized by the machine was (and still is) referred to as the “machine language instruction set” or “instruction set architecture.”
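The idea that bit patterns trigger operations can be shown with a toy machine; the 8-bit format below (a 2-bit opcode and a 6-bit operand) is invented for illustration and matches no real instruction set:

```python
def run(program):
    """Execute a list of 8-bit instructions on a one-register toy machine."""
    a = 0
    for instruction in program:
        opcode = (instruction >> 6) & 0b11  # top two bits select the operation
        operand = instruction & 0b111111    # low six bits carry the data
        if opcode == 0b00:
            a = operand          # LOAD
        elif opcode == 0b01:
            a += operand         # ADD
        elif opcode == 0b10:
            a -= operand         # SUB
        else:
            break                # HALT
    return a

# LOAD 7, ADD 5, HALT
print(run([0b00000111, 0b01000101, 0b11000000]))  # 12
```

Each input pattern is "recognized" purely by its bits, which is all a machine language instruction set really is.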

To this day, every computer processor architecture has a unique machine language instruction set.

Knowing the machine language instruction set — binary sequences representing machine language instructions — a person can write a program for that machine, even if they don’t actually have the machine itself. It’s a matter of understanding how the machine is designed, and which patterns trigger which operations to occur.

Just like Ada Lovelace did back in the 1800s.

The Ada programming language, which emerged in 1983 and has evolved ever since, is named in her honor.

Profile photo for Alan Mellor

Harder than in 1997.

It isn't.

It's shifted in scale a bit, but even in '97 we had global-scale eCommerce.

It was harder then to write a dynamic web app, with limited HTML, Internet Explorer, and IIS. A database cost a ton of cash, with no open source ones available.

There was no cloud. No AWS. No tutorials on the web, because not enough people used the web. It was on an increase, but pre-broadband.

Today, we have a different set of problems. Multiple devices present a UX challenge. User numbers are higher. Expectations are higher. But we have better tools and building blocks.

Interesting. I'm not sure it has got harder, and yet it feels it sort-of has …

But it wasn't easier back then.

Profile photo for Matthew Park Moore

“What have been the major advances in computer programming over the past 40 years?”

A lot fewer than most outsiders would guess.

My career spanned the time in question. Computer hardware improved out of all recognition. Software got bigger, to fill the available space, and more complex. But the craft of writing software was still essentially the same at the end of my career as at the beginning.

A few things changed: languages that support structured programming came in. That helped a lot with bigger projects. Language support for dynamic memory allocation helped a lot with bigger interactive or transactional systems, but it was a two-edged sword. We never had memory leaks in the FORTRAN days. Garbage collection solved the memory leak problem, at the cost of execution time overhead and unpredictable delays. Object oriented languages make managing big projects easier, but again they have pitfalls. Support libraries proliferated - you never need to write your own searching and sorting algorithms anymore. Integrated development environments make the day-to-day tasks of writing software go faster.

So, huge progress in hardware, modest progress in computer languages and development tools. But it was all wasted by feature bloat and complexity. Software gets exponentially harder to write the more complex it is. Managers always want more complexity. The only thing that restrains them is the fear that the software will not work at all. As soon as the current feature set shows any hint of stability, managers demand more features. Never mind that 95% of customers never use 95% of the current features, or even know about them: we must have more! So software always teeters on the edge of collapse, with programmers trying to pull it back from the cliff and managers trying to push it over.

I predict that if you ask this question 40 years from now you will get the same answer.

Profile photo for Aman Garg

“Wow. You work in the computer industry. It must be a real challenge keeping up, given how quickly things in that industry change.”

That’s a common sentiment you’ll hear when the person with whom you’re engaging in casual conversation discovers that you work in the information technology field. In such circumstances, I typically try to enhance my mystique by playing along with such assertions, talking up what a challenge it is to work in the computer field, while stressing how smart, intelligent and handsome one must be to excel in this field, but the truth of the matter is, it’s all just a ruse.

The more things change, the more they stay the same …

If you want to know the truth, programming really hasn’t changed all that much since Ada Lovelace hacked out some code for Charles Babbage’s programmable machine way back in the eighteen-hundreds.

Computers are useful because they can do three or four things well.

First, computers can manipulate and store vast amounts of data. Sure, an iPod can store more data than a Commodore 64 of yesteryear, but it’s a truism that hasn’t changed over time: computers are useful because they can manipulate and store data.

Secondly, computers can do conditional logic. Basically, a computer can process an if statement, performing some logic if a condition is true, and other logic if a given condition is false. Branched logic based on data the computer is maintaining is really the only impressive trick a computer program is capable of performing.

Thirdly, computers are fast. You can throw a whole bunch of data manipulations (Point #1) and conditional logic (Point #2) into a while loop that iterates a million times, and the whole process will complete within the blink of an eye. But again, the fact that computers are fast isn’t a revelation of modern-day programming. Sure, a modern processor cycling three billion times per second (3 GHz) is certainly faster than a VIC-20 running at a million cycles per second (1 MHz), but this variation in speed is just that: a variation; and as impressive as it is, it hasn’t had any fundamental effect on how we program computers.
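All three capabilities fit in a few lines (a minimal sketch, not from the original answer): stored data, a branch on that data, and a loop over a million items.

```python
# (1) data, (2) conditional logic, (3) a loop over a million items
evens = 0
for n in range(1_000_000):   # fast even at this scale
    if n % 2 == 0:           # branch on the data
        evens += 1
print(evens)                 # 500000
```

Everything a program does, from payroll to ray tracing, is some arrangement of these three ingredients.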

Evolution without revolution

Of course, that’s not to say the way we program computers hasn’t evolved. Certainly the manner in which we interact with computers has changed. We now input data using touchscreens instead of punch cards, and we can view the responses a computer generates on a high-definition LCD monitor instead of a green screen. And computer languages have certainly evolved. For example, Java itself is an ‘object-oriented’ programming language, which means it provides improved facilities to help to organize data (Point #1). And new computer programming languages like Scala and Clojure have evolved to optimize performance on these big multi-processor machines (Point #3) that are becoming cheaper and cheaper these days. But no matter what the language is, or how the syntax varies from one programming language to another, they all boil down to the same three basic things: managing data by declaring and initializing variables, performing conditional logic with if…else semantics, and using various types of loops to ensure that all of these things are being done a heck of a lot faster than a million monkeys sitting in front of a million typewriters.

Profile photo for Mark VandeWettering

Yes. But with some caveats.

The first mainframe computer I ever used was the PDP-10 at the University of Oregon. It cost a lot (hundreds of thousands of dollars back then) and ran the TOPS-10 operating system. I think I learned Pascal, FORTRAN and COBOL on that machine, as well as some PDP-10 and PDP-8 assembler. But you had to have an account, which you could run out of money: timesharing, yay!

Today for $35, you can order one of these:

To be fair, you'll need to add a disk drive (less than $10) and maybe a USB keyboard, but for $50 you can have a computer of your own that would absolutely stomp the old PDP-10 into dust. In fact, if you wanted to play with the PDP-10, you could simulate it on this hardware and pretend you are hacking away at an old ADM-3A at the computing center. Except that the simulator actually runs the virtual PDP-10 at several times the original speed.

But you don't want to run that old stuff, you wanted to program, right? Which languages are supported? Lots and lots of them.

  • C and C++, naturally.
  • Python, a great introductory language, and very useful.
  • Scratch, an entry level programming language to help teach kids how to program.
  • JavaScript/jQuery/HTML, all the good web stuff.
  • Java
  • Lots of others. Lisp/Scheme/Go/bash/awk...

Part of learning a programming language is access. It's never been easier to get access to computers and language implementations.


Of course, programming itself has gotten a bit more complex too. Our desire to build more and more complicated applications means that it takes a bit more work to do the projects you might want. But we truly stand on the shoulders of giants, and I think we've gained rather more than we've lost.


William Westlake

Fundamental skills. Today people go to code boot camps, they run off to various code competitions, and then they think they know how to write software and that being l33t is cool. I am sorry to say that it just doesn’t work that way. In other areas of technology, say electrical or mechanical engineering, a person who reads Popular Mechanics and builds a robot in his garage using kits does not get to go to work at a corporation as a mechanical engineer. Developing software requires a serious education and years of hard practice, just like becoming a mechanical engineer, an electrical engineer, or a civil engineer.

Today we try to hire people and we come across many resumes that simply lack the requisite qualifications. At some point, out of desperation, we may hire someone like this to fulfill a contract of some kind (management’s decision, not mine), and they seldom work out. Going to a boot camp may help you get started in programming, and competitions may be fun and teach you some tricks, but I will take quality over speed any time. Fast programmers usually slow a project down by inserting bugs that have to be rooted out by other, more skilled coders.

Serious programmers are thoughtful, careful, and disciplined. They don’t just charge ahead and write code; they consider what needs to be accomplished very carefully. They also know how to work with others and will communicate ideas, ask questions, and answer questions, without being arrogant and obnoxious about it. In fact, I worked with a coder just a few years ago who seemed to think his win at some contest made his opinions more valuable than others’. He was quite obnoxious, and not that knowledgeable.

I can tolerate arrogance only when combined with extreme competence, but most of the extremely competent coders I have known are also the nicest, most engaging people to work with; it is the less competent and insecure ones who are trouble.

If you want to find out if you have a passion for programming, check out my YouTube channel: LagDaemon Programming

If you have questions about my videos, or just want to chat about software, contact me on Twitter: Bill Westlake (@wwestlake)

Miles Fidelman

A sense of perspective.

In the old days, programmers “programmed” - they were handed directions by systems analysts, and told to turn them into code. Maybe it was a flowchart, maybe a set of formulas - but the job of a programmer was basically that of a translator. Or maybe a bench technician (“here, turn this design into some prototype hardware” - bench techs are expected to sort of understand what they’re building - enough to tweak things so they work & how to fix things - they’re not expected, or trained, to design things, much less develop underlying theories & practices).


Today, folks confuse “programming” with design, development, engineering - and think that just because they know how to pass a coding interview, or score high on HackerRank, or just write “hello, world” in Java, they’re suddenly macrocosmic gods - who can sit down at a terminal, start pounding keys, and actually do something useful. (Doesn’t work for 100 monkeys typing, either.)

Heck, even “hello, <your name>” has been diluted into “hello, world.”
