against "AI risk"
Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like
- bio/nano-tech disaster
- Malthusian upload scenario
- highly destructive war
- bad memes/philosophies spreading among humans or posthumans and overriding our values
- upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support
Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and opportunities must be AI related, before the analysis even begins? Given our current state of knowledge, I don't see how we can make such conclusions with any confidence even after a thorough analysis.
SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)
Comments (89)
Speaking only for myself, most of the bullets you listed are forms of AI risk by my lights, and the others don't point to comparably large, comparably neglected areas in my view (and after significant personal efforts to research nuclear winter, biotechnology risk, nanotechnology, asteroids, supervolcanoes, geoengineering/climate risks, and non-sapient robotic weapons). Throwing all x-risks and the kitchen sink in, regardless of magnitude, would be virtuous in a grand overview, but it doesn't seem necessary when trying to create good source materials in a more neglected area.
Not AI risk.
I have studied bio risk (as has Michael Vassar, who has even done some work encouraging the plucking of low-hanging fruit in this area when opportunities arose), and it seems to me that it is both a smaller existential risk than AI and nowhere near as neglected. Likewise the experts in this survey, my conversations with other experts in the field, and my reading of their work.
Bio existential risk seems much smaller than bio catastrophic risk (and not terribly high in absolute terms), while AI catastrophic and x-risk seem close in magnitude, and much larger than bio x-risk. Moreover, vastly greater resources go into bio risks, e.g. Bill Gates is interested and taking it up at the Gates Foundation, governments pay attention, and there are more opportunities for learning (early non-extinction bio-threats can mobilize responses to guard against later ones).
This is in part because most folk are about as easily mobilized against catastrophic as existential risks (e.g. Gates thinks that AI x-risk is larger than bio x-risk, but prefers to work on bio rather than AI because he thinks bio catastrophic risk is larger, at least in the medium-term, and more tractable). So if you are especially concerned about x-risk, you should expect bio risk to get more investment than you would put into it (given the opportunity to divert funds to address other x-risks).
Nanotech x-risk would seem to come out of mass-producing weapons that kill the survivors of an all-out war (one which leaves neither side standing): systems that could replicate in the wild and destroy the niche of primitive humans, really numerous robotic weapons that would hunt down survivors over time, and the like. The FHI survey gives it a lot of weight, but after reading the work of the Foresight Institute and the Center for Responsible Nanotechnology (among others) from the last few decades since Drexler's books, I am not very impressed with the magnitude of the x-risk here or the existence of distinctive high-leverage ways to improve outcomes in the area, and the Foresight Institute continues to operate in any case (not to mention Eric Drexler visiting FHI this year).
Others disagree (Michael Vassar has worked with the CRN, and Eliezer often names molecular nanotechnology as the x-risk he would move to focus on if he knew that AI was impossible), but that's my take.
This is AI risk. Brain emulations are artificial intelligence by standard definitions, and in articles like Chalmers' "The Singularity: A Philosophical Analysis."
It's hard to destroy all life with a war not involving AI, or the biotech/nanotech mentioned above. The nuclear winter experts have told me that they think x-risk from a global nuclear war is very unlikely conditional on such a war happening, and it doesn't seem that likely.
There are already massive, massive, massive investments in tug-of-war over politics, norms, and values today. Shaping the conditions or timelines for game-changing technologies looks more promising to me than adding a few more voices to those fights. On the other hand, Eliezer has some hopes for education in rationality and critical thinking growing contagiously to shift some of those balances (not as a primary impact, and I am more skeptical). Posthuman value evolution does seem to sensibly fall under "AI risk," and shaping the development and deployment of technologies for posthumanity seems like a leveraged way to affect that.
AI risk again.
Probably some groups with a prophecy of upcoming doom, looking to everything in the news as a possible manifestation.
Are you including just the extinction of humanity in your definition of x-risk in this comment or are you also counting scenarios resulting in a drastic loss of technological capability?
I expect losses of technological capability to be recovered with high probability.
On what timescale?
I find the focus on x-risks as defined by Bostrom (those from which Earth-originating intelligent life will never, ever recover) way too narrow. A situation in which 99% of humanity dies and the rest reverts to hunting and gathering for a few millennia before recovering wouldn't look much brighter than that -- let alone one in which humanity goes extinct but in (say) a hundred million years the descendants of (say) elephants create a new civilization. In particular, I can't see why we would prefer the latter to (say) a civilization emerging on Alpha Centauri -- so per the principle of charity I'll just pretend that instead of “Earth-originating intelligent life” he had said “descendants of present-day humans”.
It depends on what you value. I see 3 situations:
If you most value those currently living, that's right, it doesn't make much difference. But if you care about the future of humanity itself, a Very Late Singularity isn't such a disaster.
Now that I think about it, I care both about those currently living and about humanity itself, but with a small but non-zero discount rate (of the order of the reciprocal of the time humanity has existed so far). Also, I value humanity not only genetically but also memetically, so having people with human genome but Palaeolithic technocultural level surviving would be only slightly better for me than no-one surviving at all.
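For scale, a rough illustration: taking anatomically modern humans to be about 200,000 years old (my own stand-in figure, not one given above), a discount rate of that order would be roughly

$$ r \approx \frac{1}{2\times10^{5}\ \text{yr}} = 5\times10^{-6}\ \text{per year}, $$

so that welfare a thousand years out is discounted by only about $(1-r)^{1000} \approx 0.995$.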
Why? This is highly non-obvious. To reach our current technological level, we had to use a lot of non-renewable resources. There's still a lot of coal and oil left, but the remaining coal and oil is harder to reach and much more technologically difficult to use reliably. That trend will only continue. It isn't obvious that, if something set the tech level back to say 1600, we'd have the resources to return to our current technology level.
It's been discussed repeatedly here on Less Wrong, and in many other places. The weight of expert opinion is on recovery, and I think the evidence is strong. Most resources are more accessible in ruined cities than they were in the ground, and more expensive fossil fuels can be substituted for by biomass, hydropower, efficiency, and so forth. It looks like there was a lot of slack in human development, e.g. animal and plant breeding is still delivering good returns after many centuries, and humans have been adapting to civilization over the last several thousand years and would continue to become better adapted during a long period of low-fossil-fuel, near-industrial technology. And for many catastrophes knowledge from the previous civilization would be available to future generations.
Can you give sources for this? I'm particularly interested in the claim about expert opinion, since there doesn't seem to be much discussion in the literature of this. Bostrom has mentioned it, but hasn't come to any detailed conclusion. I'm not aware of anyone else discussing it.
Right. This bit has been discussed on LW before in the context of many raw metals. A particularly good example is aluminum, which is resource-intensive and technically difficult to refine but easy to use once refined. That's been discussed before, and looking around for such discussion I see that you and I discussed it here, but we didn't discuss the power issue in general.
I think you are being optimistic about power. While hydropower and biomass can exist with minimal technology (and in fact the first US commercial power plant outside New York was hydroelectric), they both have severe limitations as power sources. Hydroelectric power can only be placed in limited areas, and large-scale grids are infrastructurally difficult and require a lot of technical coordination and know-how. That's why the US grids were separate little grids until pretty late. And relying on hydroelectric power would further restrict the locations where power can be produced, leading to much more severe inefficiencies in the grid (due to long-distance power transmission and the like). There's a recent good book, Maggie Koerth-Baker's "Before the Lights Go Out", which discusses the difficulties and complexities of electric grids and covers in detail the historical problems with running them. They are often underestimated.
Similarly, direct biomass is generally not as energy-dense as coal or oil. You can't easily use biomass to power trains or airplanes. The technology to make synthetic oil was developed in the 1940s, but it is inefficient, technically difficult, and requires a lot of infrastructure.
I also think you are overestimating how much can be done with efficiency at a low tech level. Many of the technologies that can be made more efficient (such as lightbulbs) require a fair bit of technical know-how to use in their more efficient versions. Thus, for example, fluorescent lights are much more efficient than incandescents, but they are also much more technically difficult.
And efficiency bites you a bit in another direction as well: if your technology is efficient enough, then you don't have as much local demand on the grid, and you don't get the benefits of economies of scale. This was historically a problem even when incandescent light bulbs were in use: in the first forty years of electrification, the vast majority of electric companies failed.
We're using much more careful and systematic methods of breeding now, and the returns are clearly diminishing: we're not domesticating new crops, just making existing ones marginally more efficient. The returns are large only because the same plants and animals are in such widespread use.
This is true for some catastrophes but not all, and I'm not at all sure it will be true for most. Most humans have minimal technical know-how beyond their own narrow areas. I'm curious to hear more about how you reach this conclusion.
This may be worth expanding into a discussion post; I can't remember any top-level posts devoted to this topic, and I reckon it's important enough to warrant at least one. Your line of argument seems more plausible to me than CarlShulman's (although that might change if CS can point to specific experts and arguments for why a technological reset could be overcome).
Is there a typo in this sentence?
Yes. Intended to be something like:
Perhaps it's mainly a matter of perceptions, where "AI risk" typically brings to mind a particular doomsday scenario, instead of a spread of possibilities that includes posthuman value drift, which is also not helped by the fact that around here we talk much more about UFAI going FOOM than the other scenarios. Given this, do you think we should perhaps favor phrases like "Singularity-related risks and opportunities" where appropriate?
I have the opposite perception, that "Singularity" is worse than "artificial intelligence." If you want to avoid talking about FOOM, "Singularity" has more connotation of that than AI in my perception.
I'm also not sure exactly what you mean by the "single scenario" getting privileged, or where you would draw the lines. In the Yudkowsky-Hanson debate and elsewhere, Eliezer talked about many separate posthuman AIs coordinating to divvy up the universe without giving humanity or humane values a share, about monocultures of seemingly separate AIs with shared values derived from a common ancestor, and so forth. Scenarios in which whole brain emulations come first and then invent AIs that race ahead of the WBEs were also discussed.
I see... I'm not sure what to suggest then. Anyone else have ideas?
I think the scenario that "AI risk" tends to bring to mind is a de novo or brain-inspired AGI (excluding uploads) rapidly destroying human civilization. Here are a couple of recent posts along these lines and using the phrase "AI risk".
"Posthumanity" or "posthuman intelligence" or something of the sort might be an accurate summary of the class of events you have in mind, but it sounds a lot less respectable than "AI". (Though maybe not less respectable than "Singularity"?)
How about "Threats and Opportunities Associated With Profound Sociotechnological Change", and maybe shortened to "future-tech threats and opportunities" in informal use?
Apparently it's also common to not include uploads in the definition of AI. For example, here's Eliezer:
Yeah, there's a distinction between things targeting a broad audience, where people describe WBE as a form of AI, versus some "inside baseball" talk in which it is used to contrast against WBE.
That paper was written for the book "Global Catastrophic Risks" which I assume is aimed at a fairly general audience. Also, looking at the table of contents for that book, Eliezer's chapter was the only one talking about AI risks, and he didn't mention the three listed in my post that you consider to be AI risks.
Do you think I've given enough evidence to support the position that many people, when they say or hear "AI risk", are either explicitly thinking of something narrower than your definition of "AI risk", or have not explicitly considered how to define "AI" but are still thinking of a fairly narrow range of scenarios?
Besides that, can you see my point that an outsider/newcomer who looks at the public materials put out by SI (such as Eliezer's paper and Luke's Facing the Singularity website) and typical discussions on LW would conclude that we're focused on a fairly narrow range of scenarios, which we call "AI risk"?
Yes.
Seems like a prime example of where to apply rationality: what are the consequences of trying to work on AI risk right now, versus on something else? Does AI risk work have a good payoff?
What of the historical cases? The one example I know of is this: http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf (the thermonuclear ignition of the atmosphere scenario). Can a bunch of people with little physics-related expertise do something about such risks more than 10 years in advance? Beyond the usual anti-war effort? Bill Gates will work on AI risk when it becomes clear what to do about it.
Have you seen Singularity and Friendly AI in the dominant AI textbook?
I'm kind of dubious that you needed "beware of destroying mankind" in a physics textbook to get Teller to check whether a nuke could cause thermonuclear ignition of the atmosphere or seawater, but if it is there, I guess it won't hurt.
Here's another reason why I don't like "AI risk": it brings to mind analogies like physics catastrophes or astronomical disasters, and lets AI researchers think that their work is fine as long as it has little chance of immediately destroying Earth. But the real problem is how we can build or become a superintelligence that shares our values, and given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't kill us) is bad, and this includes AI progress that is not immediately dangerous.
ETA: I expanded this comment into a post here.
Well, there's an implied assumption here that a superintelligence that 'does not share our values' shares our domain of definition of the values. I can make a fairly intelligent proof generator, far beyond human capability if given enough CPU time; it won't share any values with me, not even the domain of applicability; the lack of shared values is so profound that it won't do anything whatsoever in the 'real world' that I am concerned with. Even if it were meta-strategic to the point of, e.g., searching for ways to hack into a mainframe to gain extra resources to finish the task 'sooner' by wall-clock time, it seems very dubious that by mere accident it would have proper symbol grounding, wouldn't wirehead (i.e. would privilege the solutions that don't involve just stopping said clock), etc. The same goes for other practical AIs, even the evil ones that would, e.g., try to take over the internet.
You're still falling into the same trap, thinking that your work is ok as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design, and uses the ideas to build something that does affect the real world?
Well, let's say in 2022 we have a bunch of tools along the lines of automatic problem solving, unburdened by their own will (not because they were so designed, but by simple omission of the immense and counterproductive effort of giving them one). Someone with a bad idea comes along, downloads some open source software, and cobbles together some self-propelling 'thing' that is 'vastly superhuman' circa 2012. Keep in mind that we still have our tools that make us 'vastly superhuman' circa 2012, and I frankly don't see how 'automatic will', for lack of a better term, contributes anything here that would make the fully automated system competitive.
Well, one thing the self-willed superintelligent AI could do is read your writings, form a model of you, and figure out a string of arguments designed to persuade you to give up your own goals in favor of its goals (or just trick you into doing things that further its goals without realizing it). (Or another human with superintelligent tools could do this as well.) Can you ask your "automatic problem solving tools" to solve the problem of defending against this, while not freezing your mind so that you can no longer make genuine moral/philosophical progress? If you can do this, then you've pretty much already solved the FAI problem, and you might as well ask the "tools" to tell you how to build an FAI.
Does agency enable the AI to do so? If not, then why wouldn't a human being be able to do the same by using the AI in tool mode?
Just make it list equally convincing counter-arguments.
This is actually one of Greg Egan's major objections: that superhuman tools come first and that artificial agency won't make those tools competitive against augmented humans. Further, you can't apply any work done to ensure that an artificial agent is friendly to augmented humans.
I have a few questions, and I apologize if these are too basic:
1) How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?
2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?
3) Is there much tension in SI thinking between achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI), or does one of these goals occupy significantly more of your attention and activities?
Edited to add: thanks for responding!
Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation" I care a lot about catastrophic risk affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.
Yes.
They spend more time on it, relatively speaking.
Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.
What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient", and why?
It depends on the context (probability distribution over number and locations and types of lives), with various complications I didn't want to get into in a short comment.
Here's a different way of phrasing things: if I could trade off probability p1 of increasing the income of everyone alive today (but not providing lasting benefits into the far future) to at least $1,000 per annum with basic Western medicine for control of infectious disease, against probability p2 of a great long-term posthuman future with colonization, I would prefer p2 even if it was many times smaller than p1. Note that those in absolute poverty are a minority of current people, a tiny minority of the people who have lived on Earth so far, their life expectancy is a large fraction of that of the rich, and so forth.
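To make the shape of that trade-off concrete (a crude sketch; the population and future-lives figures are illustrative assumptions, not numbers from the comment above): with about $7\times10^{9}$ people alive today and, say, $10^{16}$ potential future lives in a colonized future, a simple expected-value comparison favors the long-term outcome whenever

$$ p_2 \cdot 10^{16} > p_1 \cdot 7\times10^{9} \quad\Longleftrightarrow\quad \frac{p_2}{p_1} \gtrsim 7\times10^{-7}, $$

i.e. $p_2$ could be roughly a million times smaller than $p_1$ and still come out ahead on this accounting, which ignores the income-versus-whole-life distinction and the other complications mentioned above.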
What about takeover by an undesirable singleton? Also, if nanotechnology enables AI or uploads, that's an AI risk, but it might still involve unique considerations we don't usually think to talk about. The opportunities to reduce risk here have to be very small to justify LessWrong's ignoring the topic almost entirely, as it seems to me that it has. The site may well have low-hanging conceptual insights to offer that haven't been covered by CRN or Foresight.
That's a much lower standard than "should Luke make this a focus when trading breadth vs speed in making his document". If people get enthused about that, they're welcome to. I've probably put 50-300 hours (depending on how inclusive a criterion I use for relevant hours) into the topic, and saw diminishing returns. If I overlap with Eric Drexler or such folk at a venue I would inquire, and I would read a novel contribution, but I'm not going to be putting much into it given my alternatives soon.
I agree that it's a lower standard. I didn't mean to endorse Wei's claims in the original post, certainly not based on nanotech alone. If you don't personally think it's worth more of your time to pay attention to nanotech, I'm sure you're right, but it still seems like a collective failure of attention that we haven't talked about it at all. You'd expect some people to have a pre-existing interest. If you ever think it's worth it to further describe the conclusions of those 50-300 hours, I'd certainly be curious.
I'll keep that in mind.
No, but there are lots of cults that say "we are the people to solve all the world's problems." Acknowledging the benefits of Division of Labour is un-cult-like.
I certainly never had this impression. The worst that can be said about SI/LW is that some use inappropriately strong language with respect to risks from AI.
What I endorse:
What I think is unjustified:
I would have to assign a 90%+ probability to risks from AI posing an existential risk in order to endorse the second stance. I would further have to be highly confident that we will have to face the associated risks within this century and that the model uncertainty associated with my estimates is low.
You might argue that I would endorse the second stance if NASA told me that there was a 20% chance of an asteroid hitting Earth and that they need money to deflect it. I would indeed. But that seems like a completely different scenario to me.
That intuition might stem from the possibility that any estimates regarding risks from AI are very likely to be wrong, whereas in the example case of an asteroid collision one could be much more confident in the 20% estimate, since the latter is based on empirical evidence while the former is inference-based and therefore error-prone.
What I am saying is that I believe that SI is probably the top charity right now but that it is not as far ahead of other causes as some people here seem to think. I don't think that the evidence allows anyone to claim that trying to mitigate risks from AI is the best one could do and be highly confident about it. I think that it is currently the leading cause, but only slightly. And I am highly skeptical about using the expected value of a galactic civilization to claim otherwise.
Charitable giving in the US in 2010: ~$290,890,000,000
SI's annual budget for 2010: ~$500,000
US Peace Corps volunteers in 2010 (3 years of service in a foreign country for sustenance wages): ~8,655
SI volunteers in 2010 (work from home or California hot spots): like 5?
I am not sure what you are trying to tell me by those numbers. I think that there are a few valid criticisms regarding SI as an organization. It is also not clear that they could usefully spend more than ~$500,000 at this time.
In other words, even if risks from AI were by far (not just slightly) the most important cause, it is not clear that contributing money to SI is better than withholding funds from it at this point.
If, for example, they can't usefully spend more money at this point, and there is nothing with a reasonable chance of success that you yourself can do about AI risk right now, then you should move on to the next most important cause that needs funding and support it instead.
Those don't add up.
I think it's funny.
I think you misread "top charity" as "biggest charity" instead of "most important charity".
No, I didn't.
For my part, I consider that scenario pretty damn close to the AI-FOOM one, i.e. it'll quite probably result in a near-equivalent outcome but just take slightly longer before it becomes unstoppable.
I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than by more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemonstrated by our technology, I would further assert that unfriendly AI is pure science fiction which should be far down the list of our concerns compared to more clear and present dangers.
I'm going to assert that it has something to do with who started the blog.
To me it seems reasonable to focus on self-improving AI instead of wars and nanotechnology. If we get the AI right, then we can give it a task to solve our problems with wars, nanotechnology, et cetera (the "suboptimal singleton" problem is included in "getting the AI right"). One solution will help us with other solutions.
As an analogy, imagine yourself as an intelligent designer of your favorite species. You can choose to give them an upgrade: fast feet, thick fur, improved senses, or human-like brain. Of course you should choose a human-like brain, because this allows them to also fix their problems with feet, fur and senses. Now when you have an opportunity to give them Friendly AI as a next upgrade, you should do it, because it will help them fix many other problems too.
This reasoning does not work if the chance of making Friendly AI is extremely low and the chances of fixing the other problems are much higher. Then it makes sense to fix the other problems first. The important thing is that in the long term we want to fix all these problems, so it's not about whether "A" is better than "B", but whether "A, then B" is better than "B, then A".
Personally, I care primarily about AI risk for a few reasons. One is that it is an extremely strong feedback loop. There are other dangerous feedback loops, including nanotech, and I am not confident which will be a problem first. But I think AI is the hardest risk to solve, and also has the most potential for negative utility. I also think that we are relatively close to being able to create AGI.
As far as I know, the SI is defined by its purpose of reducing AI risk. If other risks need long-term work, then each risk needs a dedicated group to work on it.
As for LW, I think it's simply that people read EY's writing on AI risk, and those that agree tend to stick around and discuss it here.
There are two forms of AI in my book, and either one carries risk: the AI that learns, or the AI that comes with complete knowledge. To involve AI in risk assessment you will need the AI in the wilderness with nothing held back. Truly, though, would you do that to an AI? It's kind of like shoving all information down the brain of a 13-year-old girl: she would just go berserk and become defiant in the end.
The best alternative safe AI that contains no risk is the copied brain of a scientist.
To whom? In the post you linked, the main source of the concern (google hits) turned out not to mean the thing the author originally thought (edit: this is false. Sorry). Merely "raising the issue" is merely privileging the hypothesis.
Anywho, is the main idea of this post "this other bad stuff is similarly bad, and SI could be doing similar amounts to reduce the risk of these bad things"? I seem to recall that their justification for focusing on AI was that with self-improving AI you only need to get it right the first time: one person could eliminate the risk if they could solve the right technical problems. With preventing war or preventing upload labor, on the other hand, you need all or most people to cooperate with you, making the marginal effect of one group smaller.
The post was triggered by a private message from someone, so unfortunately I can't link to it.
Not quite. I'm saying there are a bunch of Singularity-related risks that aren't AI risks, and a bunch of Singularity-related opportunities that aren't AI opportunities. The AI-related opportunities affect the non-AI risks, and the non-AI opportunities affect the AI risks. (For example successfully building FAI would prevent war as much as it prevents UFAI.) We shouldn't be thinking just about AI risks and opportunities at this point, or giving the impression that we are.
The answer to your initial question is that Eliezer and Luke believe that if we create AI, the default result is it kills us all or does something else equally unpleasant. And also that creating Friendly AI will be an extraordinarily good thing, in part (and only in part) because it would be excellent protection against other risks.
That said, I think there is a limit to how confident anyone ought to be in that view, and it is worth trying to prepare for other scenarios.
What does "doomsday cult" mean? I had been under the impression that it referred to groups like Heaven's Gate or Family Radio which prophesied a specific end-times scenario, down to the date and time of doomsday.
However, Wikipedia suggests the term originated with John Lofland's research on the Unification Church (the Moonies):
(This is the same When Prophecy Fails that Eliezer cites in Evaporative Cooling of Group Beliefs, by the way. Read the Sequences, folks. Lotsa good stuff in there.)
Wikipedia continues, describing some of the different meanings that "doomsday cult" has held:
So, "doomsday cult" seems to have a lot to do with repeated prophecies of doom, even in the face of past prophecies being overtaken by events. So far as I know, SIAI seems more to err on the side of not making specific predictions, and thus risking running afoul of getting evicted for not paying rent in anticipated experiences, than in giving us a stream of doomish prophecies and telling us to forget about the older ones when they fail to come true.
Reading on:
I haven't been able to get a hold of a greppable copy of Lofland's book. I'd be interested to see how he expands on these seven conditions. Some of them very well may apply, in some form, to our aspiring rationalists ... though I wonder to what extent they apply to aspirants to any group at some ideological variance from mainstream society.
In one out of three quoted meanings? It seems to be a relatively unimportant factor to me.
The first, well, anyone raising a concern is going to have that.
Numbers 2 and 3 (religious problem-solving, seekership) are right out.
Number 4 (turning point), okay.
Number 5 (formation of affective bonds)... I dunno, maaybe? I mean, you can't really blame a group for people liking it. I think this was meant way more strongly than we have here.
Number 6 (neutralization of external attachments)? Absolutely not.
You didn't name the seventh, unless it's the deprivation, which again... no.
So, arguably three out of seven, of which two are so common as to be kind of silly, and one of those was a major stretch. Whee.