
 
A-L-E-X
Galaxy Architect
Galaxy Architect
Posts: 3498
Joined: 06 Mar 2017 20:19

General global warming / climate change discussion

31 Jan 2019 06:50

perhaps some kind of "credit" similar to "likes" people get but based on what they have done for others, how they have helped the environment, etc.?
Interesting notion, but the idea of rating people based on their supposed 'achievements' gives me bad flashbacks to that Black Mirror episode "Nosedive", where an entire society is built on a superficial rating system. It made everyone shallow, duplicitous and petty.
global surveillance issue
It really depends what surveillance we're talking about. Like you, I wouldn't want my personal information to be at the beck and call of a government in an Orwellian police-state of 24/7 household cameras and microphones. Yeah, you might be 'secure', but your freedom and private life are being intruded upon, and the psychological effect of knowing that someone is potentially watching you can be very damaging. Another aspect of this that was explored in 1984 was of course brain-washing: by restricting people's intellectual depth, the state made them powerless against domination via fear-tactics. Unfortunately brainwashing is an inevitable part of living in society. You and I were both brainwashed growing up, and we were further brain-washed through any education or interaction we partook in with those outside our original family.
What would be a 'benign' level of surveillance/societal brainwashing? I'm not even going to guess - but you can be sure it will be a part of our lives in a very immediate and intimate way.
going and to perhaps form some form of course-correction in case AI was veering off?
Human input would be needed to start the ball rolling anyway. At that point, human values (hopefully universally beneficial ones) will be encoded into the core algorithms of the AI/ASI, ensuring that despite the exponential growth of its processing power and equivalent mental abilities during the Singularity, it will maintain and expand upon those principles in a way that is benign to us in an easily-relatable and non-tricky way. These uncertainties in the evolution of an ASI will probably be the very reason we never make one, or at least one with enough power to actually solve our problems on its own within a useful timeframe. It would greatly benefit us for it to do so, but the slightest action that we perceive, from our perspective, as deviating from the core principles could be disastrous.
democracy in the machine
Instead of a 'ghost'... I like it.
You mentioned brainwashing, and then I saw this pop up lol-
https://www.oye.news/news/technology/vo ... ddgx9SB1qU
I agree about opposing surveillance from human authorities - they have a bad record with it, including surveilling and harassing civil rights and environmental groups that only want to peacefully protest, and even trying to provoke violence. Government/police love surveilling others but hate it when it happens to them (like hackers releasing their dirty secrets) - it's a huge double standard. I believe we should be able to do it to them if they can do it to us; that will keep them in line. Otherwise, if AI can be trusted to keep info to itself and not dump it to any human authorities, that's a better way.

That's a malignant type of surveillance that was happening against civil rights and environmental groups - the intentional effect was to chill any opposition. The authorities basically stood up for big corporations like Walmart, which was selling products made in countries with sweatshops, as well as other big companies (like fur manufacturers), while ignoring the will of the people (who actually pay their incomes via taxes). The Patriot Act empowered them to start calling libraries, online repositories, etc., to ask what sort of books or other material their patrons were reading. And then there is the whole Samsung TV fiasco, where they see what you are doing through your TV (very X-Files-esque) and target advertising - for example, a friend of mine went outside to mow his lawn and when he got back his inbox was loaded with lawn ads (and he never buys lawn stuff online).

I love Black Mirror too, it's a modern-day X Files!  I can see that kind of competition arising that leads to pettiness so we would need to "downvote" people who exhibit that behavior haha!
Speaking of 1984 did you see this?  
https://www.bloomberg.com/news/articles/2019-01-29/apple-bug-lets-iphone-users-listen-in-on-others-via-facetime

They already had a way of turning cell phones into two way mics but this is even more direct.  I had heard Zuckerberg keeps a piece of tape on his webcam at all times because he knows any web cam can be turned on remotely.
I get nervous about companies like Google being at the forefront of AI, they've been super secretive about it, but they also have some unethical practices involving government influence and creating a pseudomonopoly by gobbling up the competition.  They've been fined billions by the EU but our government is too in with them and looks the other way.

We have a big anticorruption movement going on right now to get rid of all the dark money in politics that is giving undue influence to large corporations here that get by without regulation, letting them dump toxic waste in the environment and seize private land.  Is China's anticorruption movement going in a malignant direction?

Do you think AGI might have a better chance of succeeding than ASI?
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

01 Feb 2019 03:38

Interesting website. I've never heard of OYE before - what do you think of them?
I agree about opposing surveillance from human authorities- they have a bad record with it, including surveilling and harassing civil rights  and environmental groups that only want to peacefully protest and even trying to provoke violence.  Government/police love surveilling others but hate it when it happens to them (like hackers releasing their dirty secrets)- its a huge double standard.
Morality of actions aside, I think it is important to understand that the people in these governmental positions and companies are just humans like all of us, and will always give in to human desires. All the examples you gave (and others) of sociopolitical disagreements and rough patches can be traced back to the failures of individuals. About ten or more years ago, I was big on grand conspiracy theories (I'm not implying that is the case here btw, this is just a narrative example), but as soon as I grasped the humbling notion that humans are just a bunch of self-concerned, sorta-clever apes, the political and economic scene became all the clearer and more understandable to me. It was an amazing revelation that I'm surprised more people haven't pondered. In light of this, it is why we need AI on the political stage: by their presence, we can remove this human-error element. I know this represents a rather naive and optimistic facet to my otherwise flat cynicism, but the more I research the advances in, and the benefits of, cybernetic integration in civilization, the more promising it sounds. It's all about balancing human input with the machine's objectivity and impartiality in order to prevent corruption or ruthlessness.
Speaking of 1984 did you see this?
https://www.bloomberg.com/news/articles ... a-facetime

They already had a way of turning cell phones into two way mics but this is even more direct.  I had heard Zuckerberg keeps a piece of tape on his webcam at all times because he knows any web cam can be turned on remotely.
I get nervous about companies like Google being at the forefront of AI, they've been super secretive about it, but they also have some unethical practices involving government influence and creating a pseudomonopoly by gobbling up the competition.
The spyware on the phone is hardly a new development. Although I'm not convinced that such devices were purpose-made to spy on their users, from an intel perspective it would be a natural secondary function. I guess it's a question of how paranoid one is. That being said, my cam is also covered...

I agree with you about Google's crookedness - but on the other hand, they are probably the best people to be making an AI. Hear me out: if you want an AI that is unreachable by the public (i.e. most hackers) and the best that money can design - of course you'd want that 'bot made by the most sophisticated and well-funded spyware company in the world :D. Think about it: the best software and technology in the world that is now available to the public was made either by former hackers or by military contractors. It's sad that only monopolizing and potentially corrupt companies can do this, but the world isn't perfect, and maybe this AI will be the light at the end of the tunnel and justify the means by its ends. Orrrr maybe it'll be a complete nightmare....
Is China's anticorruption movement going in a malignant direction?
Its surveillance strategy does violate some human rights, but as I said, a civilization cannot move in the technocratic direction that China wants without breaking a few rules. It might turn out for the better, but to be honest I don't know enough about it to say anything of relevance on the subject. As stated before, China has always had an interesting relationship with its citizens. Its customs and cultural ways of living are quite foreign to us Westerners, and so we can sometimes be taken aback by its political activities. What counts as 'corruption' for the Chinese government will mean precisely whatever opposes the agenda they have for their country. My only concern is how this new way of controlling society will adapt to the international stage. I have a sinking feeling it will, if it hasn't already.
Do you think AGI might have a better chance of succeeding than ASI?
It has already.

[youtube]0e0fdD5DpEI[/youtube]
The usefulness of AGI cannot be overstated. It is easy to control, (relatively) easy to program, and easy to comprehend for the non-technical: it's just a mathematical pattern aimed at completing a task. AGI can be sentient and even self-aware (whatever that means) - but only in the context of its purpose. An ASI has an element of uncertainty and unpredictability: it will be fully sentient, with all the abstractions and pitfalls thereof - which is something we've never dealt with at this magnitude so far. I mean sure, biologists interact with clearly intelligent and sentient animals like chimps, orangutans, dolphins and crows, and I've always suspected my dogs have their own private thoughts (mostly that I'm boring and don't play with or feed them enough), but we've NEVER interacted with something non-human that is as smart as us - or has the dangerous potential to be SMARTER than us. Yes, if done right, ASI can be the gateway to technological nirvana, but I doubt this in the concept's pure sense. Much more likely, an ASI built by us will at first be modeled after a human brain (because it's easier), and these builds, along with the computing and neurological advances made at the time, will help us develop even more sophisticated machines, until the Singularity arrives. Due to our fear and uncertainty, it may not develop beyond that. Further research (but not study) will be curtailed, and we'll just get by with AGI and maybe the digital augmentation of our own minds. If we can be as certain as we can be that the ASI will not betray us or act in dangerous, unforeseen ways (which by its logic is a circular statement), then ASI may exist in the far future.
Futurum Fusionem
 
A-L-E-X
Galaxy Architect
Galaxy Architect
Posts: 3498
Joined: 06 Mar 2017 20:19

General global warming / climate change discussion

02 Feb 2019 05:58

I agree about needing AI to bandage over human weaknesses - maybe it's what we need to finally forge ahead and achieve scientific advancements (especially in space) faster than our current molasses-like pace. How do you feel about universities like M.I.T. developing AI rather than Google? M.I.T. has been much more open about it, and they have a fantastic and futuristic lab.

About the site, I already knew about some of the things posted there and similar stories. The US Army had been using soldiers as guinea pigs for a new antimalarial drug even though they knew it caused brain damage. They were able to cover it up because the drug had side effects similar to PTSD, and it wasn't exposed until it started causing the same effects among non-military users. I forget the name of the drug, but it now carries a stiff warning label. During the 40s, Johns Hopkins Hospital, along with Bristol-Myers Squibb pharmaceuticals, intentionally infected 773 Guatemalans via contaminated needles to test a new drug on them. When this was brought out into the open about 10 years ago, a big lawsuit ensued. During the 80s, the CIA was using LSD on mental patients at Vermont State University Hospital as part of a black project to see how pliable the patients were to forced confessions aided by the hallucinogen. This was a prelude to what happened afterwards, when they started using similar techniques more regularly. There have been reports that they expanded usage not just to enemy combatants but also to peaceful protesters (along with the development of a directed-energy weapon that generates ultrasonic sound and can render a person unconscious from quite a distance away). Police militarization is a big issue here, and we've heard reports right in NYC about police vans carrying this kind of equipment, as well as x-ray equipment that can see inside any vehicles that pass by. This is all part of the so-called Patriot Act, which I refer to as the Act of 1984 lol. In Memphis, TN, the police department is being sued by the ACLU for civil rights violations, and they've taken it upon themselves to drive around and harass the people who initiated the lawsuit and their families by periodically breaking into their homes looking for drugs (homes where drugs have never been found).

The funny thing about phones: Blackberry's developers incorporated fake-cell-tower detection into a phone, and it seems to have become quite popular with our political leaders. The fake cell phone towers (17 of them were found within the vicinity of our capital) are what intercept conversations and texts. It was reported that the only way to make sure no one can turn your phone on remotely is to take the battery out - and now the latest phones are built so the battery can't be taken out.

Remember how the telecoms were working with the NSA to assist them in mass surveillance?  We found out later it wasn't just about data collection, they were actually using the data to put people on no-fly lists not based on what they had done but based on their conversations with others, what they liked to read, their political affiliations, etc.

Defense contractors have been developing these and more violent weapons, making a ton of money from them, and I see they get used in some pretty horrendous wars too, like the one in Yemen, while the media looks the other way. Perhaps because the military-industrial complex that Eisenhower warned us about now also includes the media here (I see a lot of military/intelligence commentators in the media nowadays - the same ones who lied that there were WMD in Iraq and who lied when they said there was no mass surveillance going on lol).

Fantastic comparison between AGI and ASI - I think if we do have ASI in our future, it will be the result of self-evolving machinery; eventually it will outpace us because, by its very nature, AI can learn much more quickly than we do. Think about something as sentient as we are that evolves as quickly as microbes do! It will be interesting if we can somehow merge that with human consciousness.
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

03 Feb 2019 02:45

How do you feel about universities like M.I.T. developing AI rather than Google?  M.I.T. has been much more open about it and they have a fantastic and futuristic lab.
Given that we still operate within a monetary system, if MIT did make a useful AI, they'd sell the schematics to a company/government to use as they will, provided that use is within the cognitive purpose framework of the machine. Corrupt use may ensue. If that did not happen and it was non-patented work, then the scientists and engineers would inevitably drift to work in environments wherein their work could be put to nefarious purposes. This is why I said I didn't much care who made the AI - because sooner or later those designs are going to be used by someone with unseemly intentions - unless the AI has some sort of complex moral protocol deeply encoded, with effects beyond the physical harm of humans. Ultimately, the tool you use to harm or help others doesn't share your sense of morality; it only personifies it. This can change if that 'tool' is self-aware.

Of course I'd love it if the software was made in an open, democratic and progressive environment free of greed or other selfish endeavors - but let's be honest here. The real world is a mix of utopia and dystopia amplified by society. Each flavors our world, and sometimes one or the other holds precedence (think dark ages), but never forever. It all depends on how and why we utilize the technology. It's a subjective opinion, because what I think of as an unjust way to use something is my opinion alone, regardless of whether it's shared by others - especially since someone else may find what I perceive as unjust perfectly justifiable in their view.

Ehh, it's all very philosophical and muddy to be honest, but the advances and uses in technology are valid topics for such discussion and will be inevitable in our immediate future.
so-called Patriot Act which I refer to as the Act of 1984
That is indeed ironic...
I think if we do have ASI in our future it will be the result of self-evolving machinery, eventually it will outpace us because by its very nature AI can learn much more quickly than we do.  Think about something as sentient as we are that evolves as quickly as microbes do! It will be interesting if we can somehow merge that with human consciousness.
The possibilities are endless. As for human merging - I think it is fairly certain in some sense. Cyborgs will be commonplace in the future (hell, they already are if you consider the actual definition of what a 'cyborg' is), but maybe not so much self-aware androids - at least not until our society becomes a lot more progressive (and has an actual working system of humane rights that aren't just liberties and obligations) and things like uplifting animals are viable. Digitally uploading human minds would be interesting. You could do it just to 'back up' your consciousness in the event of some sort of trauma (like death), just like you would a computer. Or yes, you could merge multiple minds together - this would in effect make an ASI, since you're merging the intellectual abilities of multiple humans (provided it's seamless - otherwise you'll just have someone with multiple personalities). This specific merger would not be a hive-mind in the classical sense, though - a human who is part of such a commune would still have their mental independence for the most part, with privacy. After all, you don't 'read' other people's thoughts if you're technologically merged; you just combine your cognition with the other people, like a think-tank or organization would, but without the biological and psychological barriers that make coordination and comprehension difficult.
Futurum Fusionem
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

03 Feb 2019 04:31

[youtube]PLk8Pm_XBJE[/youtube]
Futurum Fusionem
 
A-L-E-X
Galaxy Architect
Galaxy Architect
Posts: 3498
Joined: 06 Mar 2017 20:19

General global warming / climate change discussion

13 Feb 2019 07:03

Thanks for that amazing video - I do find it interesting how carbon-based sentient life and silicon-based so-called "artificial" life seem to be dovetailing, and although they may have very different evolutions, their futures may be merging. I can think of several benefits to this: combining the best properties of each into one common being that has a lifespan long enough to explore the entire universe and perhaps one day even outlast it (either by causing the universe to re-bang in the far future, journeying to other universes, or creating our own universes to explore ourselves).

What you said about many minds working together to solve problems made me think of something like "crowd sourcing" or even a movie like Artificial Intelligence which used many minds working together to solve the world's so-called unsolvable problems!
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

13 Feb 2019 20:10

 I can think of several benefits to this, to combine the best properties of each into one common being that has a lifespan long enough to explore the entire universe and perhaps one day even outlast it (either by causing the universe to rebang in the far future, journeying to other universes or creating our own universes to explore ourselves.)
A-L-E-X, I replied in the Future of Humanity and Intelligent Life in Universe thread.
Futurum Fusionem
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

06 Mar 2019 03:00

An interesting write-up on Vice about the most depressing and foreboding climate-change paper ever published: https://www.vice.com/en_us/article/vbwpdb/the-climate-change-paper-so-depressing-its-sending-people-to-therapy

Here is the original paper, if you are interested.
Futurum Fusionem
 
User avatar
Watsisname
Science Officer
Science Officer
Posts: 2320
Joined: 06 Sep 2016 02:33
Location: Bellingham, WA

General global warming / climate change discussion

06 Mar 2019 03:33

This feels more like an opinion piece than what I'd consider a scientific paper.  If people are getting so depressed that they're going into therapy because they read this, I think that's a problem of where they get their information about climate science from.  A reading of synthesis reports or their summaries would probably be more appropriate.  They do not generally make for happy reading, but they shouldn't lead to hopeless depression.

One sure way for the worst effects of climate change to be realized is if the world became convinced that it was an unsolvable problem.  But our best science repeatedly tells us that it is both solvable and economically viable to do so.
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

06 Mar 2019 03:42

This feels more like an opinion piece than what I'd consider a scientific paper.
Yes, the journals it tried to publish in did not accept its bleak outlook, and called for more revisions to the material.
Futurum Fusionem
 
User avatar
midtskogen
Star Engineer
Star Engineer
Topic Author
Posts: 1539
Joined: 11 Dec 2016 12:57
Location: Oslo, Norway
Contact:

General global warming / climate change discussion

06 Mar 2019 03:54

The paper is late. Humankind already collapsed in 2012 due to climate change by a 2007 estimate.
NIL DIFFICILE VOLENTI
 
User avatar
Watsisname
Science Officer
Science Officer
Posts: 2320
Joined: 06 Sep 2016 02:33
Location: Bellingham, WA

General global warming / climate change discussion

06 Mar 2019 04:14

Right?   And the climate science community did not support such an estimate then, either.  This is precisely why synthesis reports are so important.  They provide the fastest, easiest way to figure out what are the overall conclusions of climate science research, the evidence behind those conclusions, and the level of agreement between different researchers.
 
User avatar
midtskogen
Star Engineer
Star Engineer
Topic Author
Posts: 1539
Joined: 11 Dec 2016 12:57
Location: Oslo, Norway
Contact:

General global warming / climate change discussion

06 Mar 2019 05:04

In the entire history of humankind not a single doomsayer has been proven right, obviously.  But the belief in doom is surprisingly common.  Must be human nature.

One problem with climate synthesis reports is that they're pretty vague.  The climate is complex.

Speaking of which, there's a new paper out declaring that the anthropogenic influence on the atmosphere has been detected with 5-sigma confidence. I'm not sure how much sense such statistical measures make in climate science. On one side the influence is presented as absolutely certain; on the other, the influence is not quantified.
NIL DIFFICILE VOLENTI
 
User avatar
Stellarator
World Builder
World Builder
Posts: 889
Joined: 10 Jul 2018 18:24
Location: Sagittarius A*

General global warming / climate change discussion

06 Mar 2019 05:07

On one side influence is presented as absolutely certain, on the other side the influence is not quantified.
Sounds typical. I guess we'll find out pretty soon how it develops.

Humans always exaggerate what they expect because we operate on timescales much quicker than what is actually happening, even if we caused it. I say to everyone who disagrees with me on that point: we are just frogs slowly boiling in the water.
Futurum Fusionem
 
User avatar
midtskogen
Star Engineer
Star Engineer
Topic Author
Posts: 1539
Joined: 11 Dec 2016 12:57
Location: Oslo, Norway
Contact:

General global warming / climate change discussion

06 Mar 2019 05:33

I guess we'll find out pretty soon how it develops.
Probably not.  This has been said for a few decades, and while the world has become a better place for humans to live in in nearly all respects, calamity is kept pushed into the future without question.
NIL DIFFICILE VOLENTI
