Posts in Category "Uncategorized"

The Adobe Security Team is On the Road This Week

The Adobe Security team will be out in the community this week, providing the latest information about our security initiatives and answering your questions at two major conferences. Members of our U.S. and European teams will be at the annual OWASP AppSec EU conference May 19 – 22 in Amsterdam. We will be in booth G5 at the conference and will be raffling off a new Xbox One gaming system – everyone who stops by our booth is eligible for the raffle. Our U.S. team will also be at the Cloud Security World Conference May 19 – 21 in New Orleans, Louisiana. Abhi Pandit, our Sr. Director for Risk and Assurance, will be speaking at 9 a.m. on Wednesday, May 20th, on the topic of “Who Says Compliance in the Cloud is Just a ‘Documentation Effort’?” If you’re in New Orleans for this event, we hope you’ll take the opportunity to listen to his session. We look forward to meeting as many of you as we can at our events this week.

Adobe @ NullCon Goa 2015

The ASSET team in Noida recently attended NullCon, a well-known Indian information security conference held in Goa. My team and I attended trainings on client-side security, malware analysis, mobile pen-testing, and fuzzing, delivered by industry experts in their respective fields. The training I found particularly helpful was the one on client-side security by Mario Heiderich. It revealed several interesting aspects of browser parsing engines, various ways XSS protections can be defeated, and how using modern JavaScript frameworks like AngularJS can expand the attack surface. This knowledge can help us build better protective “shields” for web applications.

Of the two night talks, the one I found most interesting covered the Google fuzzing framework. The speaker, Abhishek Arya, discussed how fuzz testing for Chrome is scaled on a large, automated infrastructure that reveals exploitable bugs with minimal human intervention. During the main conference, I attended several good talks on topics such as the “sandbox paradox,” an attacker’s perspective on ECMAScript 2015, drone attacks, and the Cuckoo sandbox. James Forshaw’s talk on sandboxing was of particular interest, as it explained how sandboxes on the Windows platform can be improved by utilizing special APIs. Another beneficial session was Jurriaan Bremer’s on the Cuckoo sandbox, where he demonstrated how his tool can be used to automate the analysis of malware samples.

Day 2 started with keynote sessions from Paul Vixie (Farsight Security) and Katie Moussouris (HackerOne). A couple of us also attended a lock-picking workshop, where we were given picks for some well-known lock types and walked through the process of picking those particular locks. We succeeded in opening quite a few. I also played Bug Bash along with Gineesh (EchoSign team) and Abhijeth (IT team), where we were given live targets in which to find vulnerabilities. We found a couple of critical issues, winning our team some nice prize money. :-)

Adobe has been a sponsor of NullCon for several years. At this year’s event, we were seeking suitable candidates for openings on our various security teams. In between talks, we assisted our HR team in the Adobe booth explaining the technical aspects of our jobs to prospective candidates. We were successful in getting many attendees interested in our available positions.

Overall, the conference was a perfect blend of learning, technical discussion, networking, and fun.
Vaibhav Gupta
Security Researcher, ASSET

Information about Adobe’s Certification Roadmap now available!

At Adobe, we take the security of your data and digital experiences seriously. To this end, we have implemented a foundational framework of security processes and controls to protect our infrastructure, applications and services and help us comply with a number of industry accepted best practices, standards and certifications. This framework is called the Adobe Common Controls Framework (CCF). One of the goals of CCF is to provide clear guidance to our operations, security and development teams on how to secure our infrastructure and applications. We analyzed the criteria for the most common certifications and found a number of overlaps. We analyzed over 1000 requirements from relevant frameworks and standards and rationalized them down to about 200 Adobe-specific controls.

Today we have released a white paper detailing CCF and how Adobe is using it to help meet the requirements of important standards such as SOC2, ISO, and PCI DSS among others. CCF is a critical component of Adobe’s overall security strategy. We hope this white paper not only educates on how Adobe is working to achieve these industry certifications, but also provides useful knowledge that is beneficial to your own efforts in achieving compliance with regulations and standards affecting your business.

Never Stop Coding

Several members of Adobe’s security team have taken to the media to offer career advice to aspiring security professionals (you can read more about that here, here, and here). For those interested in security researcher positions, my advice is to never stop coding. This is true whether you are working in an entry-level position or are already a senior researcher.

Within the security industry, it has often been said, “It is easier to teach a developer about security than it is to teach a security researcher about development.” This thought can be applied to hiring decisions. Those trained solely in security can be less effective in a development organization for several reasons.

Often, pure security researchers have seen only the failures in the industry. This leads them to assume vulnerable code is always the product of apathetic or unskilled developers. Since they have never attempted large-scale development, they don’t have a robust understanding of the complex challenges in secure code development. A researcher can’t be effective in a development organization without an appreciation of the challenges the person on the other side of the table faces.

The second reason is that people with development backgrounds can give better advice. For instance, when NoSQL databases became popular, people quickly mapped the concept of SQL injection to NoSQL injection. At a high level, both are databases of information and both accept queries for that information, so both can have injections. People were therefore quick to predict that NoSQL injection would become as common as SQL injection. At a high level, that is accurate.

SQL injection is common because SQL is a “structured query language”: all SQL databases follow the same basic structured format. If you dig into NoSQL databases, you quickly realize that their query formats vary widely, from SQL-esque queries (Cassandra), to JSON-based queries (MongoDB, DynamoDB), to assembly-esque queries (Redis). This means that injection attacks have to be more customized to the target. However, if you are able to have a code-level discussion with the developers, you may discover that they are using a database driver that allows them to run traditional SQL queries against a NoSQL database. That could mean traditional SQL injections are also possible against your NoSQL infrastructure. Security recommendations for a NoSQL environment also have to be more targeted. For instance, prepared statements are available in Cassandra but not in MongoDB. This is all knowledge you can learn by digging deep into a subject and experimenting with technologies at a developer level.
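To make the JSON-query point concrete, here is a small, self-contained sketch. The `match` helper below is a toy stand-in for a MongoDB-style query engine, not a real driver API: it simply shows how untrusted JSON passed straight into a filter lets an attacker substitute an operator object for a plain value.

```python
# Toy illustration of NoSQL injection against a JSON-style query format.
# An attacker who controls the JSON value of "password" can send an
# operator object such as {"$ne": ""} instead of a string.

def match(doc, filter_):
    """Minimal stand-in for a MongoDB-style matcher (illustration only)."""
    for field, cond in filter_.items():
        if isinstance(cond, dict):          # operator object, e.g. {"$ne": ...}
            if "$ne" in cond and doc.get(field) == cond["$ne"]:
                return False
        elif doc.get(field) != cond:        # plain equality
            return False
    return True

user = {"name": "alice", "password": "s3cret"}

# Expected use: equality on both fields; a wrong password is rejected.
safe_filter = {"name": "alice", "password": "wrong"}

# Injection: {"$ne": ""} turns the equality check into "password is not empty".
evil_filter = {"name": "alice", "password": {"$ne": ""}}

print(match(user, safe_filter))  # False
print(match(user, evil_filter))  # True - login check bypassed

# One defense: coerce untrusted values to scalars before building the filter.
def sanitize(value):
    if not isinstance(value, str):
        raise TypeError("expected a scalar string value")
    return value
```

The defense in a real driver is the same idea: validate that untrusted values are scalar strings (or use a library that does so) before they ever reach the query layer.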

Lastly, you learn to appreciate how “simple” changes can be more complex than you first imagine. I recently tried to commit some changes to the open-source project, CRITs. While my first commit was functional, I’ve already refactored the code twice in the process of getting it production-ready. The team was absolutely correct in rejecting the changes because the design could be improved. The current version is measurably better than my first rough-sketch proposal. While I don’t like making mistakes in public, these sorts of humbling experiences remind me of the challenges faced by the developers I work with. There can be a fairly large gap between a working design and a good design. This means your “simple recommendation” actually may be quite complex. In the process of trying to commit to the project, I learned a lot more about tools such as MongoDB and Django than I ever would have learned skimming security best practice documentation. That will make me more effective within Adobe when talking to product teams using these tools, since I will better understand their language and concerns. In addition, I am making a contribution to the security community that others may benefit from.

At this point in my career, I am in a senior position, a long way from when I first started over 15 years ago as a developer. However, I still try to find time for coding projects to keep my skills sharp and my knowledge up-to-date. If you look at the people leading the industry at companies such as Google, Etsy, iSec Partners, etc., many are respected because they are also keeping their hands on the keyboards and are speaking from direct knowledge. They not only provide research but also tools to empower others. Whether you are a recent grad or a senior researcher, never lose sight of the code, where it all starts.

Peleus Uhley
Lead Security Strategist

More Effective Threat Modeling

There are a lot of theories about threat models. Their utility often depends on the context and the job to which they are applied. I was asked to speak about threat models at the recent BSIMM Community Conference, which made me formally re-evaluate my thoughts on the matter. Over the years I’ve used threat models in many ways at both the conceptual level and at the application level. In preparing for the conference I first tried to deconstruct the purpose of threat models. Then I looked at the ways I’ve implemented their intent.

Taking a step back to examine their value with respect to any risk situation, you consider the who, what, how, when, and why:

  • Who is the entity conducting the attack, including nation states, organized crime, and activists.
  • What is the ultimate target of the attack, such as credit card data.
  • How is the method by which attackers will get to the data, such as SQL injection.
  • Why captures the reason the target is important to the attacker. Does the data have monetary value? Or are you just a pool of resources an attacker can leverage in pursuit of other goals?

A threat can be described as who will target what, using how in order to achieve why.
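As a toy illustration, the sentence above maps naturally onto a small data structure. The field names mirror the questions, and the example values are the ones used elsewhere in this post, not output from any real threat-modeling tool:

```python
# A toy encoding of "who will target what, using how, in order to achieve why".
from dataclasses import dataclass

@dataclass
class Threat:
    who: str   # entity conducting the attack, e.g. organized crime
    what: str  # ultimate target, e.g. credit card data
    how: str   # method of attack, e.g. SQL injection
    why: str   # why the target matters to the attacker

    def describe(self):
        return (f"{self.who} will target {self.what}, "
                f"using {self.how}, in order to achieve {self.why}")

t = Threat(who="organized crime", what="credit card data",
           how="SQL injection", why="monetary gain")
print(t.describe())
```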

We will come back to when in a moment. Threat models typically put most of the focus on what and how. The implicit assumption is that it doesn’t really matter who or why—your focus is on stopping the attack. Focusing on what and how allows you to identify potential bugs that will crop up in the design, regardless of who might be conducting the attack and their motivation.

The challenge with focusing solely on what and how is that they change over time. How is dependent on the specifics of the implementation, which will change as it grows. On the other hand, who and why tend to be fairly constant. Sometimes, focusing on who and why can lead to new ideas for overall mitigations that can protect you better than the point fixes identified by how.

For instance, we knew that attackers using advanced persistent threat (APT) (who) were fuzzing (how) Flash Player (what). To look at the problem from a different angle, we decided to stop and ask why. It wasn’t solely because of Flash Player’s ubiquity. At the time, most Flash Player attacks were being delivered via Office documents. Attackers were focusing on Flash Player because they could embed it in an Office document to conduct targeted spearphishing attacks. Targeted spearphishing is a valuable attack method because you can directly access a specific target with minimal exposure. By adding a Flash Player warning dialogue to alert users of a potential spearphishing attempt in Office, we addressed why Flash Player was of value to them. After that simple mitigation was added, the number of zero-day attacks dropped noticeably.

I also mentioned that when could be useful. Most people think of threat models as a tool for the design phase. However, threat models can also be used in developing incident response plans. You can take any given risk and consider, “When this mitigation fails or is bypassed, we will respond by…”

Therefore, having a threat model for an application can be extremely useful in controlling both high-level (who/why) and low-level threats (how/what). That said, the reality is that many companies have moved away from traditional threat models. Keeping a threat model up-to-date can be a lot of effort in a rapid development environment. Adam Shostack covered many of the common issues with this in his blog post, The Trouble with Threat Modeling. The question each team faces is how to achieve the value of threat modeling using a more scalable method.

Unfortunately, there is not a one-size-fits-all solution to this problem. For the teams I have worked with, my approach has been to try and keep the spirit of threat modeling but be flexible on the implementation. Threat models can also have different focuses, as Shostack describes in his blog post, Reinvigorate your Threat Modeling Process. To cover all the variants would be too involved for a single post, but here are three general suggestions:

  1. There should be a general high-level threat model for the overall application. This high-level model ensures everyone is headed in the same direction, and it can be updated as needed for major changes to the application. A high-level threat model can be good for sharing with customers, for helping new hires to understand the security design of the application, and as a reference for the security team.
  2. Threat models don’t have to be documented in the traditional threat model format. The traditional format is very clear and organized, but it can also be complex and difficult to document in different tools. The goal of a threat model is to document risks and plans to address them. For individual features, this can be in a simple paragraph form that everyone can understand. Even writing, “this feature has no security implications,” is informative.
  3. Put the information where developers are most likely to find it. For instance, adding a security section to the spec using the simplified format suggested eliminates the need to cross-reference a separate document, helping to ensure that everyone involved will read the security information. The information could also be captured in the user story for the feature. If your code is the documentation, see if your Javadoc tool supports custom tags. If so, you could encourage your developers to use an @security tag when documenting code. If you follow Behavior Driven Development, your threat model can be captured as Cucumber test assertions. Getting this specific means the developer won’t always have the complete picture of how the control fits into the overall design. However, it is important for them to know that the documentation note is there for a specific security reason. If the developer has questions, the security champion can always help them cross-reference it to the overall threat model.
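To make the executable-assertion idea concrete, here is a sketch in plain Python rather than Cucumber; the `build_session_cookie` helper and its flags are invented for illustration, not a real API.

```python
# A threat-model requirement ("session tokens must not be exposed to
# scripts or to plain HTTP") captured as an automated test. The helper
# below is a toy stand-in for real session-handling code.

def build_session_cookie(token):
    """Toy cookie builder; the threat model requires Secure and HttpOnly."""
    return {
        "name": "session",
        "value": token,
        "secure": True,    # who/why: network attacker sniffing plain HTTP
        "httponly": True,  # how: cookie theft via injected script (XSS)
    }

def test_session_cookie_mitigations():
    cookie = build_session_cookie("abc123")
    assert cookie["secure"], "session cookie must never travel over plain HTTP"
    assert cookie["httponly"], "session cookie must be hidden from page scripts"

test_session_cookie_mitigations()
print("threat-model assertions passed")
```

A Cucumber version would express the same checks as Given/When/Then steps; either way, the mitigation lives next to the code and fails loudly if someone removes it.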

Overall I think the concept of threat modeling still serves a valid purpose. Examining how and what can ensure your implementation is sound, and you can also identify higher level mitigations by examining who and why. The traditional approach to threat modeling may not be the right fit for modern teams, though teams can achieve the goal if they are creative with their implementation of the concept. Along with our development processes, threat modeling must also evolve.

Peleus Uhley
Lead Security Strategist

Join Us at CSA EMEA Congress November 19 – 20!

Adobe will be participating again this year in the Cloud Security Alliance (CSA) EMEA Congress event in Rome, Italy, November 19 – 20, 2014. This conference attracts senior decision makers in IT Security from a wide range of industries and governmental organizations. This event focuses on regulatory, compliance, governance, and technical security issues facing both cloud service providers and users of cloud services. We’re excited to be back at what promises to be another great event this year.

I will be presenting a keynote session entitled “Security Roadmaps and Dashboards, Oh My!” on Thursday, November 20th, at 9:40 a.m. A “good” security roadmap is going to come from an ear-to-the-ground approach to security across all teams. It should also reflect current security industry trends. This is essential in creating a multi-faceted, balanced security roadmap that actually drives teams to build security into everything they do. How do you build and keep a solid, adaptable security roadmap in place? By focusing on the right metrics to measure success against the roadmap and developing meaningful dashboards to communicate progress and success to management. This presentation will discuss how Adobe tackled this problem across its very large product, service, and I.T. organization and provide insights into how you might tackle this problem in your own organization. I will also be available in our booth to answer questions after the session.

Please make sure to follow @AdobeSecurity on Twitter for the latest happenings during CSA EMEA Congress as we will be live tweeting during the event – look for the hashtag #AdobeCSA.
David Lenoe

Director, Product Security

Looking Back at the Grace Hopper Celebration

As someone new to the Grace Hopper Celebration (GHC), I was excited and overwhelmed to realize there were around 8,000 women from more than 60 countries in attendance. I had the opportunity to meet some really interesting people from within and outside of Adobe.

The keynote by Shafi Goldwasser (winner of the 2012 ACM Turing Award) was especially interesting. She discussed cryptography and the varied, seemingly paradoxical solutions it can help us achieve. Highlighting the need to store data privately in the cloud while simultaneously harnessing that data to solve problems (e.g., research in medicine), she emphasized the “magic of cryptography” as the key to this, and spoke at some length about looking at problems through the “cryptographic lens.”

The keynote by Dr. Arati Prabhakar (Director of DARPA) during the award ceremonies was very inspiring. She talked about the benefits military research has provided to areas like the Internet, material sciences, and safer warfare, and about further research into new areas, such as producing new materials and chemicals and rethinking complex military systems. She even showed the audience a video of a robotic arm being controlled by a quadriplegic woman hooked up to a computer.

The majority of the presentations I attended were related to security, where I met smart and motivated women working in the security field, along with many students interested in security. The talks varied from Lorrie Cranor’s talk on analyzing and storing passwords safely, to a panel discussion on integrating security into the SDLC (panelists included Justine Osborne, Leigh Honeywell, and Parisa Tabriz), to homomorphic encryption and its future uses (Mariana Raykova and Giselle Font). Other talks ranged from security fundamentals and cryptography aimed at college students to “hot topics” like wearable technology, biometrics, cloud computing, and HCI.

I also helped out at the career fair, and met a lot of undergraduates interested in working with Adobe. It was fun talking with them about what I do and learning about what they were interested in, including two students Adobe had sponsored to attend GHC this year. I met a number of industry professionals as well as students at talks and events who are working on including more girls and women in tech through outreach programs, hackathons and mentoring. It was refreshing to see a few men attending the GHC too.

The theme of the GHC this year was “Everyone, Everywhere.” It was a very inclusive environment, and apart from the talks there were events to make our evenings fun: ice breakers and dances. The long list of impressive speakers, motivating panelists, and encouraging mentors and organizations were all very accessible and inspiring. I had a great time at GHC, and I hope more people (men and women!) get to attend the conference in the future.

Devika Yeragudipati
ASSET Security Researcher

The Simplest Form of Secure Coding

Within the security industry, we often run into situations where providing immediate security guidance isn’t straightforward. For instance, a team may be using a new, cutting-edge language that doesn’t have many existing security tools or guidelines available. If you are a small startup, you may not have the budget for enterprise security solutions. In large organizations, the process of migrating a newly acquired team into your existing tools and trainings may take several weeks. What advice can we give those teams to get them down the road to security today?

In these situations, I remind them to go back to their original developer training. Many developers are familiar with the term “technical debt,” which refers to the “eventual consequences of poor system design, software architecture or software development within a codebase.” Technical security debt is one component of an application’s overall technical debt. The higher the technical debt for an application, the greater the chance of security issues. Moreover, it’s much easier to integrate security tools and techniques into code that has been developed with solid processes.

To a certain extent, the industry has known this for a while. Developers like prepared statements because pre-compiled code runs faster; security people like them because pre-compiled code is less injectable. Developers want exception handling because it makes the web application more stable and they can cleanly direct users to a support page, which is a better user experience. Security people want exception handling so that there is a plan for malicious input, and because showing the stack trace is an information leak. However, let’s take this a step further.
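The prepared-statement point is easy to demonstrate with Python’s built-in sqlite3 module; the `users` table and its rows are invented for the demo:

```python
# Prepared (parameterized) statements vs. string concatenation in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

attacker_input = "nobody' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the query itself.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(unsafe)  # every row comes back - classic SQL injection

# Safe: the parameterized form treats the input as pure data.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # no rows - the quote is just a character in a value
```

The parameterized form is simultaneously the faster pattern the developer wants and the injection-resistant pattern the security researcher wants.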

If you search the web for “Top 10” lists of developer best practices and/or common coding mistakes, you will see a clear overlap between traditional coding principles and security principles across all languages. For example, the Modern IE Cross Browser Development Standards & Interoperability Best Practices is written for developers and justifies its points using concepts that are important to clean development. However, I can go through the same list and justify many of its points using security concepts. Here are just a few of their recommendations and how they relate to security:

  • Use a build process with tools to check for errors and minify files. On the security side, establishing this will enable you to more easily integrate security tools into the build process.
  • Always use a standards-based doctype to avoid Quirks Mode. Quirks Mode makes it easier to inject XSS vulnerabilities into your site.
  • Avoid inline JavaScript tags in HTML markup. On the security side, avoiding inline JavaScript makes it easier to support Content Security Policies.

Switching to Ruby on Rails, here’s a list of the 10 Most Common Rails Mistakes and how to avoid them so developers can create better applications. When you look at those errors from a security perspective, you will also see overlaps:

  • Common Mistakes #1-3: Putting too much logic in the controller/view/model. These three points all deal with keeping your code cleanly designed for better maintainability. Security is a common reason for performing code maintenance. Oftentimes, your response to active attacks against your system will be slowed because the code cannot be easily changed or is too complex to offer a single validation point.

    This section also reminds us that the controller is the recommended place for first-level session and request parameter management. This allows for high-level sanity checking before the data makes it into your model.

  • Common Mistake #5: Using too many gems. Controlling your third-party libraries also helps to limit your attack surface and reduce the maintenance cost of keeping them up to date for security vulnerabilities.
  • Common Mistake #7: Lack of automated tests. As mentioned in the HTML list, using an automated test framework enables you to also include security tests. That post recommends techniques such as BDD, for which there are also Ruby-based BDD security frameworks like Gauntlt.
  • Common Mistake #10: Checking sensitive information into source code repositories. This is clearly a security rule. In this example, they are referring to a specific issue with Rails secret tokens. However, this is a common mistake in development generally. Separating credentials from code is simply good coding hygiene – it can prevent an unintended leak of the credential and permit credential rotation without having to rebuild the application.
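As a minimal sketch of the credentials-out-of-code point (in Python rather than Rails; `APP_DB_PASSWORD` is an invented variable name), reading the secret from the environment keeps it out of the repository and makes rotation a configuration change rather than a code change:

```python
# Keep credentials out of source control: read them from the environment.
# Real deployments would typically use a secrets manager or the platform's
# configuration store instead of raw environment variables.
import os

def get_db_password():
    """Read the credential from the environment, never from source code."""
    password = os.environ.get("APP_DB_PASSWORD")
    if password is None:
        # Failing loudly beats silently falling back to a hardcoded default.
        raise RuntimeError("APP_DB_PASSWORD is not set")
    return password

os.environ["APP_DB_PASSWORD"] = "example-only"   # simulate deployment config
print(get_db_password())
```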

Even if you go back to a 2007 article in the IBM WebSphere Developer Technical Journal on The Top Java EE Best Practices, which is described as a “best-of-the-best list of what we feel are the most important and significant best practices for Java EE,” then you will see the same themes being echoed within the first five principles of the list:

  • Always use MVC. This was also mentioned in the Rails Top 10. Centralized development allows for centralized validation.
  • Don’t reinvent the wheel. This is true for security as well. For instance, don’t invent your own cryptography library wheel!
  • Apply automated unit tests and test harnesses at every layer. Again, this will make it easier to include security tests.
  • Develop to the specifications, not the application server. This point highlights the importance of not locking your code into a specific version of the server. One of the most frequent issues large enterprises struggle with is migrating off older, vulnerable platforms because the code is too dependent on the old environment. This concept is also related to #16 in their list, “Plan for version updates.”
  • Plan for using Java EE security from Day One. The idea here is similar to “Don’t reinvent the wheel.” Most development platforms provide security frameworks that are already tested and ready to use.

As you can see, regardless of your coding language, security best practices tend to overlap with your developer best practices. Following them will either directly make your code more secure or make it easier to integrate security controls later. In meetings with management, developers and security people can be aligned in requesting adequate time to code properly.

It’s true that security-specific training will always be necessary for topics such as authentication, authorization, and cryptography. And security training certainly shows you how to think about your code defensively, which will help with application logic errors. However, a lot of the low-hanging bugs and security issues can be caught by following good, old-fashioned coding best practices. The more you control your overall technical debt, the more you will control your security debt.

Peleus Uhley
Lead Security Strategist

Building Relationships and Learning at Black Hat and DEF CON

Adobe attends Black Hat in Las Vegas each year, and this year was no exception. The Adobe security team, as well as several security champions from Adobe’s product teams, attended Black Hat, and a few stayed on for DEF CON, too. What follows are the experiences and takeaways of Rajat and Karthik, security researchers on ASSET, from Black Hat and DEF CON 2014.

Security is often characterized as a dichotomy between “breaking” and “building.” Presentations at Black Hat and DEF CON are no exception; they tend to fall into these categories as a result of the approach hackers take toward their research. For example, Charlie Miller and Chris Valasek’s “A Survey of Remote Automotive Attack Surfaces” was a memorable talk in the breaking-security category, in which they disassembled the onboard computers of over twenty commercial cars and analyzed ways to remotely control them. It was refreshing to take a step back and observe that security scrutiny can be brought to bear on all engineering design, not just software design.

In the building-security category, we appreciated the format of the various roundtables at Black Hat because they mirrored many of the themes of security conversations across Adobe. For example, we found the roundtable discussions on API security and continuous integration and deployment to be valuable lessons for our researchers and security champions. At DEF CON, we came across DemonSaw, a new tool that lets you securely share files in a peer-to-peer network without requiring cloud storage. We found it to be an impressive implementation of cryptography fundamentals in the service of security and privacy.

We noticed a gradual shift in the focus of the talks from last year, in that more hackers are going after hosted services and mobile/embedded applications. This gave Adobe security champions the opportunity to see how hackers adapt to changes in the industry and to get an attacker’s perspective on compromising applications that may be similar to our own. Oftentimes security champions had to strike a balance between talks that apply to their day-to-day work, like Alex Stamos’ “Building Safe Systems at Scale,” and talks that were interesting given their impact on the industry, such as the talk about BadUSB. We also saw the recurring theme that each year the security community finds more serious vulnerabilities than the last, as new products and platforms flood the market. It was a reminder that with the universal growth of technology comes a need for deeper investment in security.


Adobe-hosted event at the Cosmopolitan’s Chandelier Bar on August 7th.

Black Hat and DEF CON offer much more than the presentations and trainings. The Black Hat Arsenal showcased cutting-edge security research, with prototypes of packet-capturing drones and tools that harvest information from various embedded devices. Most of the tools on display were open-source and it was great to see research shared in the security community. The Vendor Expo was an expansive mix of large companies promoting their product suites, along with newcomers exploring niche problems such as log mining, threat intelligence, and biometric security. No DEF CON conference is complete without a Capture the Flag (CTF) event, which is a place for professionals–or hobbyists–to build their skills and compete with each other in solving real-world challenges related to forensics and Web exploitation – this year’s competition was won by PPP.

It was evident that Black Hat and DEF CON have steadily grown in popularity. For the first time at Black Hat we were standing in line to enter briefings. The size and scale of these events keep increasing, which is a testament to the expanding influence of security in technology and business. Despite the growth, the atmosphere at Black Hat and DEF CON remains collegial. Meeting and talking with people about the challenges we all face always makes for a valuable learning experience.

Karthik Raman, Security Researcher
Rajat Shah, Security Researcher
View of an Internship with ASSET

I technically joined the security community last year when I began my Master’s in Information Security at Carnegie Mellon University. I gained a lot of theoretical and practical knowledge from the program, but my internship with ASSET gave me a totally new perspective on how security in a large organization works. I worked on multiple projects over the summer in the beautiful city of San Francisco. I have outlined one of them below.

Adobe follows a Secure Product Lifecycle (SPLC). To cater to the large number of current and future Adobe products, the security guidance ASSET provides to teams needs to be scalable. Scalability requires automation; otherwise the number of security researchers and their time becomes a bottleneck. Security guidance also needs to be targeted to the configuration of each project. For example, a web service written in Java that handles confidential information requires a very different set of guidelines than an Android application.

For such targeted guidance, we use a smart system called SD Elements. I performed a gap analysis on SD Elements’ security recommendations for Android and iOS apps as well as for desktop and rich-client applications. I did quite a bit of research in the process; some of my sources included the CERT guidelines for securing applications, internal pen-test reports, and a number of academic research papers and vendor reports. Adobe has now moved to cloud deployment for many of its products: Creative Cloud and Marketing Cloud are prime examples. To support this momentum, I also expanded the deployment phase in SD Elements, a set of guidelines for DevOps teams to securely deploy and maintain their applications in the cloud.

During my internship, I worked with Mohit Kalra, who was my manager, and Karthik Raman, my mentor. They were always available to guide me whenever I got stuck on a problem and always provided specific Adobe context. My other team members were also very helpful and considerate throughout the internship, and they always made me feel at home. As part of Adobe Be Involved month, I also got a chance to volunteer at the Edgewood Center for Children and Families, which was a humbling experience. We played kickball with the kids, and it was really great to see smiles on their faces.


Volunteer picture from the Edgewood Center for Children and Families. (I’m the guy at the bottom left.)

As a result of my internship at Adobe, I feel like I’ve really improved my technical knowledge and my understanding of how security works within an organization. Thanks, Adobe.

Mayur Sharma
Security Intern