Hi there, some of you may know me from PTIO/Spite/other Matrix servers. I’m friends with cn3m and am one of the people accused of spreading misinformation, etc. Some of you may know me for my “unique” discussion style, but either way, in this long-form post I’m going to keep calm and present my thoughts.
Some background on me: I’m a security researcher/engineer and have been investigating platform security for just over 7 years now. I can’t prove that without doxing myself, but it should give you a bit of a hint about where my perspective comes from.
Alright so, TAJ talks about four main “anti-privacy armies” they are in order:
I’m going to mostly skip Brave as I 100% agree with his points; that, combined with a massively splintered ecosystem of features, means security updates take longer to be delivered to Brave (sometimes upwards of a month; here’s the Brave CTO saying it’s “typically about a week”).
Ignoring Brave, we can start off the list with Apple. This isn’t going to be a post explaining why Apple is the best, etc; I’m only going to address the points TAJ has brought up.
He starts off by linking a Gist by iosecure. I think a lot of the points it brings up are valid, the author seems to know what he’s talking about (with regard to Darwin internals, referencing JL’s work, etc.), and it’s well researched, but I have some real issues with the TLDR and some of the points it makes. The argument boils down to six main points:
- iOS subliminally and constantly collects sensitive data, links it to hardware identifiers almost guaranteed to link to a real identity
iOS does collect some information, specifically during the activation process, App Store purchases, Apple Pay purchases, and so on. But to say it “subliminally and constantly” collects sensitive data implies it’s always collecting data. If you start up a phone and MITM the setup process, you’ll see some information of concern being sent, such as the UUID and IMSI. What people don’t seem to realise is that this isn’t sent for no reason: it massively screws over the stolen-phone market, because once a phone is marked lost or stolen it cannot be activated. In an ideal world, sure, you’d be able to opt out of this, but it’s a feature that works and I understand why it’s there.
The information that Apple collects is solely related to your usage of applications and services inside its ecosystem; they do not store “sensitive” personal information unless you decide to sync it with iCloud. The information Apple has can be used to follow you around their ecosystem, yes: they know what you purchase, what in-app purchases you make, Apple Pay payments, etc. This information is collected for security and anti-fraud purposes, and is a requirement of the issuing card provider (Mastercard in this case).
- iOS forces users to “activate” devices (including non-cellular) which sets up a remote UUID-linked (also collecting registration IP) database for a given device with Apple for APNS/iMessage/FaceTime/Siri, and then Apple ID, iCloud etc. Apple ought be open to users about “activation” and allow users to avoid it.
This I’ll agree with: it does collect your IP address and device UUID. I don’t believe this to be a major issue, but yes, in an ideal world we could prevent this.
- Apple Activation servers are accessed via Akamai, which means sensitive data may be cached by Akamai and its’ peering partners' which includes many global ISPs and IXPs
This is also true, but I don’t believe it presents any inherent risk. Akamai handle the distribution of some of Apple’s cloud services, but the connections are encrypted with TLS/SSL keys that Akamai do not have.
- Risk that macOS could be iOS-ified in the near future in the name of “security” while ignoring significant flaws in iOS’ design wrt privacy, forcing users to unnecessarily trust Apple with potentially sensitive data in order to even simply use devices.
This is a valid concern, but not really for the reasons he’s listed. I believe it’s a concern for users who want total control over their device, which is a fair thing to want. Personally I’m okay with giving up the freedom of controlling my system to use something like iPadOS (non-jailbroken) for the security/privacy it provides, but some users are not, and I understand that. Once again, the information you trust them with pertains to their ecosystem, not your usage of the system itself.
- Controversial, draconian surveillance laws being implemented worldwide which could take advantage of Apple’s data collection and OS design choices, notably in, but not limited to, China, one of Apple's largest markets.
Yeah, this is another valid issue, but there’s not really much we can do about it. Google also provide services to the Chinese market; why aren’t we complaining about them? Yes, I’m aware they’re currently censored over there, but other large companies we use every day also provide services to the Chinese market. Typically these systems are set up so they cannot be accessed from outside China, to protect the government’s interests against privacy/security researchers, so there is very little risk a phone will accidentally beacon back to a Chinese government data farm.
- If iOS is to really be considered a secure OS, and if vanilla macOS is to become more secure, independent end-user control must be considered. Increased low-level design security at the cost of control, and the ability to prevent leaking data, cannot be considered a real improvement in security.
iOS is a secure OS; you’ve even said as much in the Gist. Your complaints are about privacy, not security. There are movements such as #FreeTheSandbox (which I’ll get onto in a second) that want user-accessible ways to fully control your device. The sad truth is there’s no real way this can be implemented securely for the end user.
Interestingly, Apple do actually provide this, in the form of a fuse that is burned on the manufacturing line before the device is finalised. If this fuse isn’t burned, the device operates in “development” mode, which allows downgrading the OS, bypassing checks, modifying boot-args, etc.
Alright! Thanks for hanging in there, now we’re done with that Gist we can talk about the points TAJ has brought up.
Yeah I don’t have anything to say about this one, it’s not ideal and is something they keep screwing up annoyingly. I personally don’t use Siri on iOS or macOS but I agree it’s something they need to take more seriously.
Alright, this is where things get more interesting. The vulnerability was published by Zuk; he’s an interesting guy. I (and Apple) have no issue with his claim that there is some kind of security issue there (although I recently looked into this and couldn’t find a way to even trigger it on a normal phone). The claim I take issue with is that it was exploited in the wild; this is what Apple are supposedly “lying” about. I’ve spoken to several engineers and people on the security team, and they’re not lying. There is no evidence that it was exploited in the wild, and the burden of proof is on Zuk to provide it, which he seems to refuse to do.
This just simply isn’t true. If you look at the enterprise signing certificates and the entitlements Apple can provide, nothing there can be used to systematically track a user’s activity on the system. Some of them are used for things like enterprise MDM solutions, and even then they can’t track what you do on the phone; they can push apps, config profiles, etc., but without a full-chain exploit there’s no way they’re tracking you with that.
Not really sure what this has to do with anything, Apple are a company that do marketing, business research, etc. All companies buy data.
Yeah, that doesn’t surprise me; American law enforcement being dishonest as usual. But I’m not sure what this has to do with Apple: Apple refused to unlock the phone, and eventually the FBI managed to get access, I believe via a NAND swap that allowed further brute-forcing of the PIN. Keep in mind this was an iPhone 5 or 5C (I don’t remember specifically); there was no SEP on those devices, the SEP being a hardware-enforced lockout mechanism for secure processes such as PIN-code unlocking.
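To put rough numbers on why that hardware-enforced lockout matters, here’s a back-of-the-envelope sketch. All the per-guess rates are illustrative assumptions (real NAND-mirroring and lockout-delay figures vary by technique and device), not measured values:

```python
# Back-of-the-envelope: brute-forcing a 4-digit PIN with and without
# a hardware-enforced lockout. All rates below are assumptions for
# illustration, not measured figures.

PIN_SPACE = 10 ** 4  # 10,000 possible 4-digit PINs

# Without a SEP, NAND mirroring lets an attacker restore flash state
# and reset the attempt counter; assume ~45 seconds per guess
# including the restore step (illustrative assumption).
SECONDS_PER_GUESS_RAW = 45
worst_case_raw_hours = PIN_SPACE * SECONDS_PER_GUESS_RAW / 3600

# With escalating lockout delays enforced in hardware, sustained
# guessing degrades to roughly one attempt per hour (illustrative).
SECONDS_PER_GUESS_LOCKED = 3600
worst_case_locked_days = PIN_SPACE * SECONDS_PER_GUESS_LOCKED / 86400

print(f"no SEP, NAND mirroring: ~{worst_case_raw_hours:.0f} hours worst case")
print(f"hardware lockout:       ~{worst_case_locked_days:.0f} days worst case")
```

The point isn’t the exact numbers; it’s that moving the retry counter into tamper-resistant hardware turns an attack measured in days into one measured in over a year, and far longer for alphanumeric passcodes.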
Also not really sure what this has to do with privacy or security. I absolutely agree Apple do some shitty things to resellers, repair shops, etc. They’re slowly getting better (the new iMacs are covered under their self-repair program, etc.), but ultimately they make more money that way. I don’t agree with it, but I don’t see how it affects the end user’s privacy/security.
lol brave bad, we agree there
- These people are partly joint with the GrapheneOS cult, primarily due to its lead developer orchestrating all these things in hindsight and his followers purposely sharing his opinion garbage as "facts".
I’d like to refer you to rule 1 of this subreddit
Do not be evil. Opinions are welcome, facts more so. Attack arguments, not people. Hating, baiting, trolling, flaming will be dealt with strictly.
I think you calling his opinion garbage is a violation of this rule, but who am I to say ;-)
I started off writing a response to that deleted reddit post, but saw Daniel responded in detail here: https://removeddit.com/r/firefox/comments/gokcis/_/frh286y/
and Daniel is right: Firefox is less secure than Chrome/Chromium. It’s slowly improving, and as more and more gets ported to Rust (hopefully the JS engine too!) I hope that some day I can switch from Chrome to Firefox.
The thing about the “Chrome shills” such as me, cn3m, and Daniel is that we don’t actually want to use Chrome; we’d love to use Firefox. But at this point it just doesn’t make any sense for a security-conscious person to use Firefox, except for Tor Browser in “safer” mode, which I think is actually quite good. Not because Firefox is more secure, but because safer mode removes half the attack surface of the insecure code in Firefox.
Yeah, not even sure what to say there. Your argument started off solid, citing good sources, and then it seems like you wanted to pad it out, so you had to find more bad things to say about the topics at hand.
- The moderator u trai_dep has taken his time to censor me off completely, so that none of my criticisms can be ever read about his dictatorial moderation and the GrapheneOS discussion I had with its lead developer, who at the end gave me plenty evidence about his rudeness, ironically which was against the rules of the subreddit.
So I read this argument, and it basically seems like you’re wrong. That’s not a bad thing, but I’m not sure why you’re saying you’re “censored”: you were given sources and evidence, and you refused to accept them.
They don’t make good enough devices; the only Android devices you should trust are Pixel devices. The rest are a mess of splintered ecosystems, massive monolithic kernel patches to add specific features, vendor bloatware that can be used to exploit you, etc.
- There is also the issue that he always claims Google Pixel 3/3a is a must with Titan M chip running non verifiable code that one has to rely on for Google's claim of being same as open sourced code, and that it does not have spyware. And he maintains his stand about developing the ROM exclusively for the Pixel devices, which also house Pixel Visual Core, a proprietary Google-only CPU+GPU unit independent of the Snapdragon SoC and with negligible documentation claimed "only" to be used for HDR+ camera algorithm processing. Google has had a history of lying with things like the Location History toggle, or their known data collection business and known relationship with NSA.
Daniel talks about the OpenTitan project; as he says, you cannot have open-source hardware on an ARM SoC, and in the future hopefully we’ll move towards RISC-V. But yes, you’re correct: you do need to trust the “non-verifiable” code running on the SoC. You know where else you need to trust that? On literally every other phone, smart device, etc. Unless you’re a company with millions of dollars to license it, you’re not going to be given the SoC source. So you raise a valid issue, but it’s not only an issue with Google; it’s an issue with SoCs in general. As I said, I hope we can move towards RISC-V soon.
As for your last point, I know what you’re referring to, and I tried to find the commit. They didn’t “lie” about it; it was a software bug where the flag wasn’t properly set, it existed for less than two days, and it was then patched out.
Now, as for their relation to the NSA, I really don’t want to comment on that. Lots of companies do business with government agencies, and lots were involved in programs for lawful data access through FISA warrants. Regardless of what you think of that process, this isn’t an issue specific to Google, and isn’t an issue with GrapheneOS at all, as all the Google services are removed.
Sorry for any spelling errors, I’m writing this on my iPad at a hotel atm and the spell check for reddit seems to disable after a certain length paragraph. Lame.
Response to TAJ again
This is easier and clearer than splitting up comments, stupid reddit 10k max limit.
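Speaking of that limit: here’s a hedged sketch of how one could split a long reply at paragraph boundaries so each chunk stays under the cap (`split_comment` is a hypothetical helper I made up for illustration, not anything reddit provides):

```python
def split_comment(text: str, limit: int = 10_000) -> list[str]:
    """Split text into chunks of at most `limit` characters,
    preferring paragraph boundaries (hypothetical helper)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Hard-split any single paragraph that exceeds the limit alone.
        while len(para) > limit:
            if current:
                chunks.append(current)
                current = ""
            chunks.append(para[:limit])
            para = para[limit:]
        # Try to append the paragraph to the chunk being built.
        candidate = f"{current}\n\n{para}" if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            chunks.append(current)
            current = para
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk could then be posted as a numbered reply in the same thread.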
- To be fair, even Android devices collect this data. So, what was the point of specifically pointing out (or some may say "targeting") Apple?
That was my point.
- Now, there are dozens and dozens and dozens of posts by users (now deleted, mostly on r/privacy due to paranoid users with no constant accounts) claiming how Apple gave them 10-20 MB of takeout data, while Google gives 10-100 GB of takeout data. This creates a very dishonest consensus in privacy community about how Apple collects 100-1000x lesser data.
But they do collect less data: most of Google’s business model is advertising; Apple’s is not.
- Looking at Schmidt's report findings, we can clearly see how Apple polls devices lesser than Google for location and other personally identifiable user metrics. But the unintentionally dishonest narrative that got created by some people in the community (either out of lack of knowledge or on purpose) is one of the major reasons why I cited the iosecure's writeup, to counter this false narrative creating a confirmation bias echo chamber around Apple in privacy community.
Can you reword this? I don’t really understand how it’s dishonest if you yourself say Apple polls devices less than Google for location/personally identifiable user metrics.
- Apple collects plenty data, even if it is 10x less than Google, and Apple's takeout not counting in the iCloud videos, images and large files (which is the bulk of Google's massive takeout archives) does not mean Apple is magically a less offending company in privacy terms. My standards for privacy are high so as to not tolerate data collection at all, or the most minimal amounts of it. Apple collects plenty data, and thus is not suitable for privacy, even if it is collecting lesser data than Google.
Yes, they collect some data, but you can disable most of it in the iOS settings; the rest is data they need to provide their services. The device UUID is used for device security/anti-theft as well as iMessage activation, etc.
Also, that last point is just a lie; it does include iCloud videos, images, and large files. I did an export recently.
- This is a terrible attitude to live with, and also to even promote this idea of "all companies buy data". This creates a sense of helplessness in the privacy community, where not just trust is fragile, but the mental state of people dealing with paranoia, who have gathered the courage to fight the corporations and governments demolishing their privacy.
You’re actually right, it is. But sadly all companies do; if you run an operation of that scale, selling devices, services, etc., market research is a huge part of the effort. I understand your concerns with the statement, but the truth is that if you want to be profitable you must understand your customers’ needs.
- We should help build courage instead of telling people "all hope is over, and all privacy seeking hope is false now".
Yes, we should. However sadly the internet as it stands doesn’t provide a way to legitimise the “hope” you may give people. The web itself is just entirely screwed up, people are working on it, but currently unless you want to exclude yourself from 90% of the web, there is no easy way for most people to avoid this.
- Apple takes a lot of your personally identifiable information, and this has plenty implications.
- Although very minimally subtle and conjectural, it adds up realistically, as it has been observed. Apple intentionally started locking down repairs and hardware, observing your personal credentials, linking them to your previous Apple purchases, which made them jack up repair prices, locked down the hardware to the point that users cannot even get them repaired from third party places.
Yeah, we agree. It’s pretty predatory behavior and evidence of a monopoly, but how does this affect user privacy/security? The data is encrypted and Apple cannot access it. That Gist’s point about iOS lacking “full disk crypto” might be true in the strictest sense, but APFS’s encryption is battle tested.
- Now, this point above seems to be about ethics and financial stuff. What else is there? Citing one of my links cited by OP, and addressed above - Apple themselves were one of the main partners buying data from Facebook.
Again, market research, knowing your customer, etc.
- Data collected by Apple does not seem to stay at Apple.
Cite your sources please, where have Apple ever sold data?
- Apple likes to buy data from others, and as we know factually, Apple, Google, Facebook and others are basically CIA state extended arms working for NSA together.
I’m not going to dignify that with a response, if they were Apple would have no issue with complying with legal requests to add backdoors or unlock phones, stuff they blatantly have not done.
- And thus, they (conjecturally) share data among each other, as they all share data to NSA (factually).
This makes no sense, given your conjecture is flawed. I’m also not sure how you get from Apple buying data to Apple sharing data with the NSA.
- This data collection and friendly sharing seems to affect not particularly privacy (even though it seems like it certainly is), but this has extended implications on your financial life and your freedom with your owned Apple electronics.
I didn’t say it didn’t have implications on your financial life and device freedom, I said it doesn’t affect privacy in the way you’ve claimed it does.
- Rule 1 also applies not out of context, but strictly considering the context. Out of context judgement is disingenuous and must not be used to support oneself's arguments.
My point was more that you’re angling yourself as this moral bastion, above us all, when you too are guilty of the actions you accuse others of.
- The ideas he and his fans propagate is that Firefox is this terrible piece of software created by a free time XDA forum modder,
Firefox may have been created by a free-time XDA forum modder (I actually didn’t know that! TIL), but it is not that now; it has a company and funding behind it. I don’t think it’s terrible; I think what has been built is impressive. I just don’t think there’s any reason for security/privacy-conscious individuals to use it over Chromium.
- and Chrome is this perfect heavenly thing that is not just unpenetrable,
Nobody said this; I’ve even cited sources in some Matrix arguments showing Chrome is not impenetrable. The differences are that the code standard is higher and the software is properly designed with good security boundaries.
- but even is "10 years ahead of Firefox in security",
IMO it is, but that isn’t saying much.
- making it seem like Chrome is some magic potion from the future.
Again no, it’s not magic, it’s just the best we have atm.
- These Chrome stans have some crazy bizarre ideas when they start to claim that "Firefox lacks a sandbox at all".
Firefox lacked a sandbox until very recently, and the sandbox implementation they have right now is challenged by some fundamental decisions made earlier in the development of Firefox. I can elaborate in detail on Matrix if you’d like, or I can make a blog post on it.
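If you want to check where your own install sits, the content-process sandbox strictness is exposed as a pref. The pref name is real, but the meaning of each level is platform- and version-specific, so treat the value below as illustrative:

```javascript
// user.js / about:config: content-process sandbox level.
// Higher is stricter; defaults and maximums differ per platform
// and Firefox version, so this value is illustrative, not a
// recommendation.
user_pref("security.sandbox.content.level", 4);
```

about:support also reports the effective sandbox level, which is easier than digging through about:config.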
- A lot of this metaphorical pile among other things was debunked in this thread. All people should give the comment train here a read. https://np.reddit.com/r/netsec/comments/i80uki/theymozilla_killed_entire_threat_management_team/g15kts1/?context=10000
Yes, I read through this, and I don’t think it was debunked. A lot of the concerns were from earlier in the project’s lifespan; however, I’d argue the issues are still there. It’s just not as solidly constructed as Chrome/Chromium.
- Anyone can call me baseless and ridiculous later, OP included.
We’ll see where the night takes us :-)
- If a security researcher refuses to acknowledge basic problems like WebRTC IP leaks and other Google issues, and ignores the privacy argument where the focus indeed is on privacy, there is not much left to say.
Those WebRTC leaks that can be fixed by changing one setting in about:config?
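For anyone following along, that one setting (note that disabling it kills WebRTC entirely, so video-conferencing sites will break):

```javascript
// user.js / about:config: disable WebRTC peer connections,
// preventing the classic local/VPN IP leak via STUN requests.
// Side effect: sites that rely on WebRTC (video calls, some
// file-sharing tools) stop working.
user_pref("media.peerconnection.enabled", false);
```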
- Victimising oneself and claiming oneself as part of and establishing this deck of cards as one group entity seems contradictory to the point OP wants to prove. Weird.
No, you misunderstand: I’m not claiming I’m a victim. I’m saying that we’ve looked at the available options and decided that Chrome/Chromium is the most secure option currently; if this changes, I’m happy to reevaluate my thinking. For example, when all of Firefox is ported to Rust I will probably switch to Firefox anyway, even if Chrome switches to Rust too, mainly because of the extensibility I get on Firefox. (I miss you, tree tabs :-( )
- Anyway, when Daniel can claim Firefox and 4chan armies are deployed against him to harass him (a common theme in his replies to just about any of his criticisms on the internet), he would never touch Firefox, as much as use it. Same can be observed in his or his work GrapheneOS' fans. Moreover, he has what seems to be personal grudges against Mozilla devs, being abusive in his mailing lists. https://lists.torproject.org/pipermail/tor-dev/2019-August/013995.html
Again, I’m not really interested in discussing this; this post should rest solely on technical merit (artist vs. art). However, for what it’s worth, I do believe Daniel is harassed by 4chan users who refuse to believe that closed-source software can be as secure as open-source software. I guess it’s the “many eyes” argument, which in itself makes no sense.
- The sources, as I repeatedly mention are debunked, and are largely not just conjectural, but I would take it a step further to say there is a sense of confirmation bias and seeking of approval for the products and software they themselves use and want to defend their purchase and/or use of these products. This is a common behavioural pattern observed in corporate loverboys.
This is just untrue. Once again, I’d ask you to cite your sources and provide evidence that these claims are debunked, when one of the sources you literally cited says the opposite. Also, can we keep the derogatory comments out of it? I think you’ll agree I’ve been nothing but respectful here; I’d appreciate it if you did the same.
- This is false and weird, as OP also noted my proprietary unverifiable security concerns in the very next quoted paragraph.
It is not false. If you look at Pwn2Own, most of the Android chains that involve non-Google devices go through vendor bloatware; the massive logic-bug chain comes to mind. My statement stands: the only devices you should trust with your security/privacy are Pixel devices, ideally running GrapheneOS. I’m not here to say they’re perfect, but they’re the best we have.
- Having those one or two less pieces of unknown hardware objectively helps reduce lot of attack surface,
Once again we agree; you won’t find documentation for that hardware on any other similar phones either, unless you’re a MediaTek, Qualcomm, etc., partner.
- which is my intended goal here when recommending community to stay away from Pixels. The more standard the hardware, the easier to deal with.
Okay then recommend something better? I’m not saying Pixels are the holy grail, I’m saying they’re the best we’ve got when you account for the security/privacy they provide when running something like GrapheneOS.
- We are concerned with Google Pixels with proprietary unverifiable hardware in this case,
Also fair, as mentioned above, please provide an alternative.
- with disingenuous transparency on Google's side towards people.
This is an issue with Google itself; GrapheneOS removes all Google components, so I fail to see how it’s an issue here. If you’re talking about them being disingenuous by not providing the Verilog for the Titan M chip, then I think you’re using the wrong term.
- So, it is logical to make point about Google and NSA relations.
Is it? Again it seems like you’re drawing your own inferences here, I don’t see a need for them.
 - https://www.reddit.com/r/brave_browser/comments/92ylr0/update_delay_between_chromium_and_brave/e39ch74?utm_source=share&utm_medium=web2x&context=3
 - https://gist.github.com/iosecure/357e724811fe04167332ef54e736670d
 - https://www.ssllabs.com/ssltest/analyze.html?d=albert.apple.com&latest
 - https://en.wikipedia.org/wiki/Google_China
 - https://privacy.apple.com
 - https://en.wikipedia.org/wiki/Dunning–Kruger_effect
 - https://en.wikipedia.org/wiki/Chromium_(web_browser)
 - https://en.wikipedia.org/wiki/Tor_(anonymity_network)#cite_note-tbb-138
 - https://blog.chromium.org/2008/10/new-approach-to-browser-security-google.html