
Woman suffers severe burns in a chemical attack at Georgia park
A Georgia woman is being treated for severe burns after someone poured a corrosive chemical onto her head at a public park in Savannah
abcnews.go.com

A group of more than 150 parents sent a letter on Friday to New York governor Kathy Hochul, urging her to sign the Responsible AI Safety and Education (RAISE) Act without changes. The RAISE Act is a buzzy bill that would require developers of large AI models - like Meta, OpenAI, DeepSeek, and Google - […]
theverge.com

For a long time, my development life existed within the predictable world of my local machine. I wrote code, it ran, and that was the extent of my world. A few months ago, I had the chance to step outside of my comfort zone and dive into the world of open source.

If I had to describe the feeling of that first moment, I would point to a specific scene from the Disney movie "Ralph Breaks the Internet": the moment Ralph and Vanellope walk into the world of the internet for the first time. Just like Ralph and Vanellope stood on that balcony, gazing wide-eyed at the endless, futuristic skyline, I felt completely small. In the movie, the internet is described as a sprawling, infinite metropolis, bustling with flying vehicles and towering skyscrapers representing the giants of the web.

Coming from the quiet, controlled environment of my local machine, the open source ecosystem felt like that futuristic city. The towering buildings weren't Amazon or Google, but massive repositories with millions of lines of code. The flying cars weren't just traffic; they were the constant stream of Pull Requests, Issues, and Discussions happening in real time across the world. People were continuously building, rebuilding, breaking, and fixing projects. It was terrifying, yes. But just like Ralph looking out at that horizon, I realized the potential of this limitless world.

My Contribution Highlights

Driven by this excitement, I didn't want to just be a tourist in this new city. It was intimidating, but I am incredibly proud to say that I have successfully contributed to some of the foundational pillars of the Python data ecosystem. I have had PRs merged into:

Scikit-learn
NumPy
Pandas
Dagster

Seeing my code become part of tools that millions of developers rely on was an exciting experience.

Why I Fell for Dagster

This realization explains why I fell so deeply for Dagster. While exploring it, I was amazed by its core philosophy of Software-defined Assets. The concept of treating data not just as a byproduct, but as a first-class asset, was fascinating. Treating data as assets shifts the focus from managing execution tasks to maintaining the freshness of the actual data products. This approach automatically generates clear lineage graphs, allowing you to easily understand dependencies and track how data flows through the system. As a result, debugging and collaboration become significantly more efficient, because you are interacting with defined data outcomes rather than abstract code logic.

Reading the Dagster source code didn't feel like studying. I found myself mentally visualizing the entire process: how the data flows, how the assets are materialized, and how the engine handles dependencies. Simulating these complex data journeys in my head was incredibly fun and engaging.

Stepping out of my local machine and jumping into the open source world brought lots of changes. It helped me realize my passion for data management systems. This was a fantastic and fun experience, and I will be continuing this journey.
dev.to

As the XRPL moves toward permissionless programmability, starting with Smart Escrows, it needs a secure, reliable, and high-performance engine to run custom developer code. The conclusion we came to is that this engine should be WebAssembly, often abbreviated as WASM. Earlier this year, the RippleX Programmability team surveyed various virtual machine (VM) options and concluded that WASM was the best choice for the XRPL ecosystem for a few different reasons. Read more here: https://dev.to/ripplexdev/a-survey-of-vms-for-xrpl-programmability-eoa

WASM is not exclusive to the blockchain world. It's a universal, open standard that was originally designed to run high-performance applications in web browsers (though it is now used in many other contexts as well). It is intended to support any language on any operating system, and in practice most languages have some level of support. https://webassembly.org/

WASM-based smart contract runtimes are deterministic, secure, and portable. The entire system relies on WebAssembly's core promise of deterministic execution, meaning the code will run identically on all rippled nodes, regardless of operating system or hardware, ensuring that consensus is maintained. In addition, WASM promises better performance and supports many general-purpose programming languages, like Rust, C, and Go (in other words, a Web2 developer may not have to learn a new language). With those benefits, WASM is the most popular smart contract runtime choice among many of the newer blockchain projects, such as Polkadot, Cosmos, Near, and most recently Soroban on Stellar.

Host Functions

To securely access ledger data and improve the efficiency of computation-intensive tasks, the WASM code relies on Host Functions. Think of a Host Function as an internal API call: it is implemented outside the WASM code (in the efficient C++ code that runs the XRPL) and allows the WASM program to securely query data from the ledger state. In EVM terms, this is roughly equivalent to precompiles. (A toy illustration of this host/guest boundary appears at the end of this post.)

The fundamental rule for the WASM code in Smart Escrows is read-only access, with only very specific write access allowed. WASM code has read-only access to all ledger objects and a variety of other on-chain data (such as ledger header information). It only has write access to the Data field of the Escrow it is attached to. This strict limitation ensures that the custom logic cannot negatively affect the integrity of the ledger or the balances of other accounts.

WASM runtime environments are low-level virtual stack machines, like the JVM, and can be embedded into any host application (such as rippled). There are several different implementations, with various tradeoffs. We chose Wasmi due to its performance and history of use in other blockchains (Polkadot and Solana also use Wasmi). https://dev.to/ripplexdev/xrpl-programmability-wasm-runtime-revisit-2ak0

In summary, WASM is the secure, high-performance virtual machine that executes the custom release logic for Smart Escrows. It allows developers to deploy complex, conditional rules using familiar programming languages, all while operating within carefully guarded boundaries that ensure the security and stability the XRPL is known for.
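To make the host-function idea concrete, here is a minimal, illustrative sketch in Python using the wasmtime bindings. This is not Wasmi, not rippled, and not the real XRPL host-function API; the module, the function names, and the fake ledger lookup are all invented for the example. It only shows the shape of the boundary: the guest imports a function implemented by the host, and the host decides what (read-only) data the guest can see.

```python
from wasmtime import Engine, Store, Module, Instance, Func, FuncType, ValType

engine = Engine()
store = Store(engine)

# Guest module (stand-in for "Smart Escrow" logic): imports one host
# function and exports a release check.
wat = """
(module
  (import "host" "get_escrow_amount" (func $amt (result i32)))
  (func (export "can_release") (result i32)
    (i32.ge_s (call $amt) (i32.const 100))))
"""
module = Module(engine, wat)

# Host function (the "internal API call"): runs outside the sandbox and is
# the only way the guest can observe ledger-like state -- read-only here.
def get_escrow_amount():
    return 250  # stand-in for a real ledger lookup

host_fn = Func(store, FuncType([], [ValType.i32()]), get_escrow_amount)
instance = Instance(store, module, [host_fn])  # imports matched positionally

can_release = instance.exports(store)["can_release"]
print(can_release(store))  # 1 -> the release condition is met
```

In the real system, the host side of that boundary is the C++ code inside rippled, and the guest is the compiled Smart Escrow logic.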
dev.to

Introduction

A commitment device is a mechanism for "blocking your escape routes in advance so your future self won't slack off." No matter how important a task or project may be, we sometimes find ourselves picking up our phone or drifting into unproductive habits, postponing what needs to be done. This isn't a matter of weak willpower or lack of discipline. Humans are naturally wired to behave this way.

That’s exactly why relying only on willpower is unrealistic. Instead, it’s more practical to predict the actions that could cause future disadvantages and lightly block those escape routes ahead of time. Setting up environments and rules that make it harder for your future self to slack off — that is the essence of a commitment device.

Examples of Commitment Devices

By now, you may have a rough idea of what kinds of things can serve as commitment devices. Here are some representative examples. For instance, the following are all valid commitment devices:

Not keeping snacks at home = Preventing the unwanted behavior of unnecessary snacking in the first place
Leaving your phone in a different room = Physically blocking the behavior of reaching for your phone when you don't need it
Going for a walk without your wallet = Preventing impulse purchases you might regret later
Making your desk a "work-only space" = Establishing a rule that "when I sit here, I work," which naturally encourages focus

None of these require special tools or expensive equipment. What matters is accepting the premise that your future self is vulnerable to temptation. Once you acknowledge the habits you want to avoid, even the smallest adjustments can function as commitment devices.

Tip: Why Do Commitment Devices Work?

Let’s briefly look into why commitment devices are so effective. First, humans have a strong instinct to choose the easiest option. As a result, we tend to prioritize immediate comfort over long-term benefits. This tendency is known as present bias. On top of that, daily life is filled with countless decisions. These accumulate and cause decision fatigue, gradually draining our willpower.

"I shouldn’t eat this, but I want to." "I should keep working, but I want to check social media." Even though these internal conflicts aren't actions themselves, they silently consume mental energy in the background. Eventually, in the middle of work, a small thought like "Maybe I’ll just check my phone for a minute…" quickly snowballs into procrastination.

That’s why removing the temptation from your sight altogether is far more effective for conserving willpower and increasing success. This is the core principle that makes commitment devices powerful.

Other Types of Commitment Devices (Alternative Approaches)

So far, the commitment devices we explored were basic forms such as "removing" or "restricting" access. However, there are also more forceful types that rely on external factors.

Using Money as a Commitment

This approach uses the psychology of "I paid for it, so I should not waste it." Examples include:

Charging yourself a penalty if you fail to reach a goal
Paying for an online language course upfront
Signing a yearly gym contract

Humans naturally avoid losses. This loss aversion makes financial commitments a strong motivator.

Public Accountability

Many people find that being observed by others drastically increases their ability to stick with a behavior. The risk of embarrassment or losing credibility makes it harder to break the commitment. Examples include:

Announcing on social media that you will post something daily
Reporting progress regularly to friends or coworkers

Setting these as external rules helps boost consistency and success rates.

As we’ve seen, commitment devices are not limited to simply "removing" or "restricting" temptations. Using external costs or social pressure can also be extremely effective in shaping behavior. And the variety doesn’t end there. By combining environmental adjustments, financial stakes, and social accountability, you can create systems that make controlling your behavior much easier. What matters most is understanding which types work best for you and implementing them in a way that doesn’t feel forced. Small, consistent adjustments in your daily life can lead to surprisingly meaningful changes.

Tip: Caution Points and Practical Tips

Commitment devices are highly effective, but they’re not perfect. There are a few things to keep in mind:

If the restriction is too strong, it may cause a rebound.
Striving for perfection makes continuation harder.
If the purpose is unclear, the commitment becomes mere hardship.

The key is adjusting your rules to a level that feels naturally manageable. This alone can dramatically increase your consistency. So how can you implement commitment devices in a sustainable way? Here are some practical and easy-to-apply tips.

Start with small restrictions. No need to create big rules right away. Try something like "leave your phone in another room for one hour." Low-effort beginnings help build early success.

Set "exception rules". For example: one free day per week, or no rules during travel. These soft boundaries prevent burnout and make long-term continuation easier. Some commitments also work well as time-limited challenges such as "just for two weeks" or "only this month."

Conclusion

Finally, I’d like to share the personal reason that inspired this article. Recently, I caught a mild illness while also going through a move, and my usual routines changed drastically. I had a habit of drinking only on weekends and playing games at night, but I stopped drinking for health reasons. Since unpacking after the move felt exhausting, and I wasn’t in great condition, I naturally stopped playing games too. These were small forms of relaxation for me — not obsessions. But by coincidence, multiple circumstances overlapped, and I ended up not doing them at all. Surprisingly, this accidental restriction led to a noticeable increase in productivity.

Of course, since illness and moving were the triggers, this situation is not a strict commitment device. However, the effect — "unintentionally restricting behaviors that were not helpful" — was essentially the same. In the end, regardless of the reason, recognizing the behaviors you should avoid and creating distance from them is often the first step toward meaningful change. And intentionally creating these conditions, instead of leaving them to chance, is what commitment devices are all about.

By incorporating small rules or environmental adjustments into your daily life, you can improve your behavior in a way that feels natural and effortless. Through my own experience, I was reminded just how powerful these small structures can be. Everyone has a completely different lifestyle and situation. But if reading this article made you think of even one habit you’d like to improve, I encourage you to try making a small adjustment. Commitment devices don’t require complex systems or perfect plans. Even simple, everyday tweaks can be surprisingly effective. A tiny step today may end up helping your future self far more than you expect.

Thank you for reading!
dev.to
King Charles says his cancer treatment to be 'reduced'
nbcnews.com

Britain's King Charles III said in a televised announcement that, thanks to early detection and intervention, his cancer treatment will be reduced in the new year. The king's message was in support of the Stand Up to Cancer charity campaign. Charles disclosed his cancer diagnosis in February 2024, less than 18 months after taking the throne.
nbcnews.com

A24's "Marty Supreme" marketing campaign underscores the creative ways studios have been trying to get people to theaters.
nbcnews.com

The long take, the unbroken tracking shot, "the oner" - whatever you want to call it, filmmakers agree that it's one of the most difficult technical achievements in cinema. It's a feat of creativity, but also great coordination and choreography when a single, tiny mistake can ruin a shot. Some famous examples: the casino scene […]
theverge.com

Case 1: Reverse an array using another array

```java
class Main {
    public static void main(String[] args) {
        int[] num = {10, 11, 12, 13, 14, 15, 16};

        System.out.println("Original Array :");
        for (int i = 0; i < num.length; i++) {
            System.out.print(num[i] + " ");
        }

        // Create a result array to hold the reversed values, having the same length as num.
        int[] result = new int[num.length];

        // Walk num from the end while filling result from the front.
        for (int i = num.length - 1, j = 0; i >= 0; i--) {
            result[j++] = num[i];
        }

        System.out.println();
        System.out.println("Reversed Array :");
        // Print the resultant array.
        for (int i = 0; i < result.length; i++) {
            System.out.print(result[i] + " ");
        }
    }
}
```

Case 2: Reverse an array in-place without extra space

```java
class Main {
    public static void main(String[] args) {
        int[] num = {10, 11, 12, 13, 14, 15, 16};

        System.out.println("Original Array :");
        for (int i = 0; i < num.length; i++) {
            System.out.print(num[i] + " ");
        }
        System.out.println();

        int left = 0;
        int right = num.length - 1;
        while (left < right) {
            // Swap num[left] and num[right] without a temporary variable:
            // num[right] is reassigned inline, and the old value survives in
            // the arithmetic. (Note: this can overflow for very large ints.)
            num[left] = (num[left] + num[right]) - (num[right] = num[left]);
            left++;
            right--;
        }

        System.out.println("Reversed Array :");
        for (int i = 0; i < num.length; i++) {
            System.out.print(num[i] + " ");
        }
    }
}
```

Case 3: Check whether the given string is a palindrome or not. A palindrome is a value that remains the same when it is reversed.

```java
class Main {
    public static void main(String[] args) {
        String str = "abcdcba";
        int left = 0;
        int right = str.length() - 1;
        boolean flag = true;

        // Compare characters from both ends, moving inward.
        while (left < right) {
            if (str.charAt(left) != str.charAt(right)) {
                flag = false;
                break;
            }
            left++;
            right--;
        }

        if (flag) {
            System.out.println(str + " is a palindrome");
        } else {
            System.out.println(str + " is not a palindrome");
        }
    }
}
```
dev.to
Let's face it, it's easy to fixate on the big gifts that crowd around the Christmas tree. However, we'd argue that the true treasures are the small, useful, and thoughtful gifts tucked within stockings. That's why, for this guide, we've pooled together a bunch of tried-and-tested gadgets and goods to help bolster someone's everyday carry […]
theverge.com
PaveLaunch - Launch and Discover Products PaveLaunch is the premier destination for creators to showcase their latest inventions and for tech enthusiasts to explore cutting-edge innovations. pavelaunch.com
dev.to
Fed chair says buying a home unlikely to become easier soon
nbcnews.com

A British mother holding an infant, a Ukrainian refugee, the wife of a Navy veteran, a German man about to celebrate his first wedding anniversary.
nbcnews.com

Britain’s King Charles III has revealed that his cancer treatment will soon be scaled back, crediting an “early diagnosis, effective intervention and adherence to doctors’ orders” for an improvement in his condition.
nbcnews.com

US AI policy news today features a flurry of government action across multiple fronts. Policymakers are scrambling to build America’s AI advantage while setting guardrails – almost like trying to catch a rocket after it has launched. In recent months, the US government has rolled out executive orders, new initiatives, and legislation around artificial intelligence, aiming to stay competitive without ignoring safety. The picture is complex: some actions aim to sprint ahead on innovation, while others emphasize caution and risk management.

US AI Policy Report Card: Leadership vs Caution

Federal AI policy remains very much a work in progress. The US has no single AI law; instead it relies on a patchwork of executive actions and guidelines. For example, in January 2025 the Trump administration issued an executive order titled “Removing Barriers to American Leadership in AI” (Source: Federal Register). This order explicitly rescinded many of President Biden’s previous AI directives and told agencies to eliminate rules seen as hindering innovation. In July 2025, the White House then published America’s AI Action Plan, a comprehensive strategy listing over 90 federal initiatives to boost U.S. AI development and leadership.

By contrast, the Biden administration’s earlier approach emphasized managing AI risks while investing in infrastructure. In October 2023, President Biden signed an order on Safe, Secure, and Trustworthy AI (EO 14110) to promote ethical development. Then in January 2025, he issued an order on Advancing U.S. Leadership in AI Infrastructure. That 2025 order declares the US must build its own AI data centers and clean-energy power to lead the global race. It sets goals like modernizing energy and computing infrastructure.

These swings reflect different philosophies. Experts warn that deregulating AI alone won’t automatically deliver great results. Arati Prabhakar and Asad Ramzanali note that we need government-led R&D to solve big problems (like rare diseases or education), not just unregulated chatbots. In their words, “we need clear-eyed action to harness AI’s benefits,” not merely letting tech companies run wild.

Major Federal Initiatives and Bills

In November 2025, the Trump White House launched the “Genesis Mission” – a nationwide project explicitly compared to the Manhattan Project. This executive order tasks the Department of Energy with creating an integrated AI research platform using the nation’s vast federal science datasets. The aim is a national R&D push that accelerates breakthroughs in energy, healthcare, national security, and more.

Meanwhile, on the legislative side, Congress is considering new bills to build an AI-ready government workforce. One example is the AI Talent Act (introduced Dec 2025) to help federal agencies recruit and retain top AI experts. This bipartisan proposal (by Rep. Sara Jacobs and Sen. Andy Kim) would create specialized talent teams and streamlined hiring tools. “The United States can’t fully deliver on its national security mission, lead in responsible AI, and compete in the AI race if our federal agencies don’t have the talent to meet this moment,” Rep. Jacobs warned.

In defense and security, AI skills are being added to training. The FY2026 defense authorization included the AI Training for National Security Act, requiring the Pentagon to add AI and cyber-threat content to basic training for troops and civilian staff. As Rep. Rick Larsen noted, “Artificial intelligence is rapidly changing the national security threat landscape”. These steps ensure our military and agencies develop the expertise to handle AI-driven challenges.

• Executive Orders: Biden’s 2023-2025 orders focused on safety and infrastructure; Trump’s 2025 orders pivot to boosting innovation and R&D.
• Congressional Legislation: The National AI Initiative Act (2020) funds R&D; new proposals like the AI Talent Act and NDAA provisions strengthen the AI workforce.
• R&D Funding: Significant new programs at DOE, NSF, and under the CHIPS Act are channeling billions into AI compute and research.
• Agency Guidance: FTC, Commerce, and other agencies have released guidelines on AI fairness, privacy, and safety; federal hiring and ethics policies are being updated.

Overall, federal strategy today mixes aggressive investment in innovation (like the AI Action Plan) with selective oversight signals (like the Safe AI EO). Analysts note this means US companies largely operate under existing laws, adapting voluntarily rather than facing brand-new AI-specific rules. But with dozens of new initiatives, the US government is clearly upping its AI game.

State vs. Federal: A Patchwork Landscape

With no national AI law, states have rushed in. As of late 2025, over 45 states had considered AI legislation and about 31 had enacted some regulations. Colorado, for example, passed the nation’s first AI bias law for “high-risk” systems (like hiring and lending), and California has dozens of pending AI bills on content labeling, deepfakes, data privacy, and more. These state actions cover areas from consumer protection to employment to education.

This patchwork prompted the Trump administration to intervene. In December 2025, President Trump announced he would sign an executive order blocking state AI regulations. “There must be only one rulebook if we are going to continue to lead in AI,” he said. Critics argue this deregulatory push could let tech companies evade accountability for harm, while supporters say it avoids a confusing array of 50 different laws. South Dakota’s Attorney General, by contrast, said he fully supports the state’s ability to impose “reasonable” AI regulations.

• Federal stance: Voluntary guidelines and agency enforcement (FTC, DoC, etc.), no sweeping AI law yet.
• State activity: A mosaic of laws on bias, privacy, content labeling, etc. (Colorado’s AI Act, California proposals, etc.).
• Tension: Trump’s proposed order would override state AI rules. This drew pushback – South Dakota’s AG insists states must retain the right to impose “reasonable” AI regulations.

In everyday terms, it’s as if we wrote 50 separate rulebooks for AI (one per state) and are now debating whether a single unified manual would be simpler.

Industry and Emerging Voices

These policy shifts are unfolding alongside rapid industry changes. For example, AMD has been landing major AI contracts and building next-generation AI supercomputers, pushing its data center revenue way up. While AMD’s rise is primarily a business story, it ties into national strategy: US policy favors a strong domestic AI hardware base. In the software world, companies like OpenAI, Google, and Microsoft continuously update their AI offerings (e.g. Copilot tools) and often lobby on regulations.

Public and expert voices are also loud. Many surveys show Americans are excited about AI’s potential but worried about issues like bias or job loss. Regulators often seem to be patching leaks while AI surges ahead. Still, agencies like the FTC have vowed to use existing laws to police AI. For instance, the FTC will pursue unfair AI practices (bias, scams, privacy abuse) under current statutes. Think tanks and researchers even issue “AI policy report cards” to grade government progress. The key is to focus on credible news, since AI policy ultimately affects everyone – from tech entrepreneurs to everyday citizens.

Looking Ahead: Future of AI Policy

So, where do we go from here? More action is likely in 2026 and beyond. Expect new congressional proposals (like data privacy or technology bills) and agencies refining AI guidelines. States will keep proposing laws unless federal clarity arrives. Internationally, the US will engage in AI diplomacy at forums like the G7 and OECD, helping shape global norms. In short, AI policy will stay dynamic. By keeping up with each new executive order, rulemaking, or bipartisan report, readers can track how tomorrow’s technology landscape is being shaped today.

Frequently Asked Questions (FAQs)

1. How is AI used in the U.S. military? The Department of Defense launched GenAI.mil, integrating Google Cloud’s Gemini to support both defense operations and administrative tasks.
2. Are U.S. agencies using AI for public services? Several federal agencies, including HHS and Medicare, are expanding AI in administration and healthcare, sparking both innovation and debate.
3. What is America’s AI Action Plan? The AI Action Plan outlines pillars to accelerate innovation, build AI infrastructure, and lead global AI policy and security efforts.
4. Does U.S. AI policy address bias and safety? Federal policy encourages voluntary safety and fairness standards but also shifts away from earlier Biden-era protections, focusing on innovation.
5. What federal laws exist for AI in the U.S.? There is no single AI law; Congress has introduced acts like the TAKE IT DOWN Act on deepfakes and proposals like the CREATE AI Act, but broad regulation is still developing.
6. Could AI regulation impact AI stock markets? News about AI policy shifts—like chip export decisions or federal regulation—often moves markets and influences AI-related stocks. (General trend reflected in market coverage.)
7. How does U.S. AI policy compare globally? Unlike the EU’s detailed AI Act, U.S. policy relies on executive actions and voluntary standards focused on innovation rather than strict mandates. (Trend visible in comparison to EU policies.)

Conclusion

US AI policy news today shows a country racing to lead global AI development while reshaping how innovation, safety, and national security work together. With new federal executive orders, major shifts in chip export rules, and upcoming nationwide AI regulations, the U.S. is clearly moving toward a unified strategy that strengthens innovation and reduces fragmented state-by-state laws. These actions aim to protect American competitiveness, support domestic AI talent, and build the next wave of secure and responsible AI systems.

For U.S. readers, the key takeaway is simple: AI policy will affect everything from jobs to healthcare to national security. Staying informed helps businesses prepare, helps developers build responsibly, and helps citizens understand how AI will shape daily life. As the U.S. finalizes its 2025–2026 AI roadmap, the country’s choices today will determine how strong—and how safe—America’s AI future becomes.
dev.to

With the last two blogs, we understood a lot about application architecture patterns and how isolating business logic from the code is an essential element at an organisational level. A small decision taken wrongly before building the application can lead to inconsistencies later in the application's life, where a change can be too costly (in both time and money) to incur.

But since we're talking about frontend architecture patterns, it's also essential to understand how the entire page gets delivered and updated for the user. So going forward, we'll discuss application-level architectures for web products like:

SPAs (React, Vue, Angular — client-rendered)
BFF (Backend for Frontend)
SSG/ISR/SSR (Next.js, Nuxt, Astro)
Frontend Monoliths
Microfrontends

In this blog, we'll learn about how SPAs (Single Page Applications) came into the picture and what they brought to the table.

Before SPAs, there were MPAs, or Multi Page Applications. These, as the name suggests, were "multi" because of the way they rendered pages: every interaction rendered a new HTML page, reloading the old screen. These were mostly server-controlled applications (thin client) with very minimal JS dependency, but a very slow UX.

To resolve this issue of full-page reloads, AJAX was born. AJAX stands for Asynchronous JavaScript and XML. It introduced the XMLHttpRequest (XHR) API, letting pages request data without reloading. This is called partial updates. Callbacks, and later Promises, emerged to manage these asynchronous requests, eventually leading to the modern fetch() API.

But as apps grew, so grew the need for faster JS execution. Doing DOM manipulation and complex UI logic on slower JS engines was painful, and maintaining apps with so many XHR calls and DOM changes turned out to be a challenge. Thus came probably one of the most revolutionary moments in web development history: Google's V8 engine.

V8, at a high level:

introduced JIT (Just In Time) compilation, which compiled JS code to native machine code instead of interpreting it line by line
implemented optimisations like hidden classes, inline caching, and a modern garbage collector

This also enabled the creation of Node.js (2009) and huge server/client ecosystem growth. Here's a video on JS engine internals and V8's architecture in detail.

These pieces, along with others like client-side routing (the History API), the concept of the virtual DOM, componentization of modules, and bundlers, together produced single-page capabilities: no reload on navigation, client-rendered UIs, and modular code.

With all these benefits, SPAs can be considered a landmark in frontend engineering, and quite honestly gave rise to the role of the frontend engineer. They opened the door to far more interactivity and complex engineering ideas on the web, and thus came the, quite infamous, rise of dashboards and SaaS. This in turn was adopted by top product companies, which built some groundbreaking libraries.

The evolution continues — SPAs didn’t end the story; they changed it...
dev.to

Understanding Growth in a High-Pressure Business Environment

Organizations today face increasing pressure to grow while remaining adaptable. Market conditions shift rapidly, technology evolves constantly, and customer expectations continue to rise. In this environment, growth cannot be accidental. It must be intentional, structured, and aligned with a clear direction. Corporate strategy defines that direction, while business development ensures progress toward it. Skyler Bloom emphasizes that long-term success depends on connecting thoughtful planning with disciplined execution.

Strategy as the Framework for Long-Term Direction

Corporate strategy provides the framework that guides organizational decisions. It defines where the company intends to compete, how it plans to differentiate itself, and what priorities deserve the greatest focus. Without this structure, organizations risk reacting to short-term pressures rather than pursuing sustainable growth. A strong strategy also establishes clarity across teams. Employees understand how their roles contribute to broader objectives, and leaders can evaluate opportunities more consistently. Strategic alignment reduces confusion and supports accountability, enabling organizations to move forward with confidence even during periods of uncertainty.

Core Elements That Support Strategic Clarity

Several foundational elements contribute to effective corporate strategy. A clearly articulated mission and vision define purpose and long-term ambition. Competitive positioning clarifies how the organization intends to stand apart within its industry. Portfolio planning ensures that resources are distributed wisely across initiatives, balancing innovation with stability. Resource allocation determines how time, capital, and talent are invested to support strategic priorities. When these elements work together, organizations gain a roadmap that supports disciplined decision making. This approach reflects the strategic thinking often associated with Skyler Bloom, who promotes clarity and intentionality in planning.

Business Development as the Driver of Execution

While strategy defines intent, business development drives execution. This function translates strategic goals into tangible actions by identifying opportunities that align with the organization’s direction. Business development operates at the intersection of planning and the market, connecting internal goals with external realities. Teams involved in business development actively analyze trends, customer needs, and competitive dynamics. Their role is not simply to pursue growth, but to pursue the right kind of growth. By filtering opportunities through a strategic lens, business development ensures that expansion efforts support long-term objectives.

Responsibilities That Enable Business Development Success

Business development includes several interconnected responsibilities. Opportunity identification allows organizations to anticipate emerging needs and respond proactively. Partnership development builds relationships that extend capabilities, whether through distribution, technology integration, or collaboration. Negotiation and deal structuring translate opportunity into formal agreements. Market expansion initiatives support entry into new regions or customer segments. When these responsibilities are guided by strategy, business development becomes a powerful engine for sustainable growth. This disciplined approach aligns with the principles emphasized by Skyler Bloom, who advocates for opportunity evaluation grounded in strategic purpose.

Why Alignment Between Strategy and Development Is Critical

Strategy and business development often operate at different levels of an organization. Strategy is typically shaped by leadership teams focused on long-term vision, while business development engages directly with markets and partners. Without alignment, initiatives can drift away from organizational priorities. Alignment improves efficiency and effectiveness. Teams focus on opportunities that reinforce the company’s mission. Resources are allocated more effectively, and decision making becomes faster and more consistent. Communication across departments improves, reducing duplication and increasing organizational coherence. Organizations that achieve this alignment gain a competitive advantage. Growth efforts reinforce strategic direction rather than compete with it. This integrated approach is frequently highlighted by Skyler Bloom as a foundation for resilience and adaptability.

A Practical Example of Strategic Alignment in Action

Consider a company seeking to expand through digital channels. Leadership establishes a strategy focused on improving customer experience and scalability. This strategic vision sets the direction, but business development must bring it to life. Business development teams may identify technology partners capable of supporting digital platforms. Strategic acquisitions could accelerate entry into new segments. Logistics partnerships may improve fulfillment efficiency, while subscription models create recurring revenue. Each initiative aligns with the original strategy. Together, they transform vision into measurable outcomes, demonstrating how coordination between planning and execution drives meaningful growth.

Measuring Progress and Maintaining Focus

To ensure alignment remains effective, organizations rely on performance indicators that reflect both strategic intent and execution quality. These metrics help leaders evaluate progress and refine priorities. Common measures include revenue generated from new initiatives, the success of strategic partnerships, and the speed of launching new products or services. Customer acquisition and retention rates provide insight into market response. Another important indicator is strategic fit, assessing whether business development efforts clearly support long-term objectives. Regular evaluation allows organizations to remain focused while adapting to change, maintaining balance between flexibility and discipline.

Overcoming Common Alignment Challenges

Maintaining alignment between strategy and business development is not without challenges. Organizational silos can limit communication and slow collaboration. Encouraging cross-functional engagement helps address this issue. Short-term pressure can also disrupt alignment. Companies may pursue immediate gains that conflict with long-term goals. Balancing quick wins with sustained investment is essential for lasting success. Market volatility adds complexity, requiring organizations to remain flexible without abandoning strategic focus. Organizations that navigate these challenges effectively strengthen both planning and execution, building resilience in uncertain environments.

Building a Model for Sustainable Growth

When corporate strategy and business development operate in harmony, organizations gain the ability to grow with purpose. Strategy provides direction and discipline, while business development delivers momentum and results. Neither function succeeds alone. Leaders such as Skyler Bloom encourage organizations to view these functions as interconnected forces rather than separate processes. By aligning vision with action, companies position themselves to innovate, adapt, and compete effectively. This relationship allows organizations to transform insight into achievement. When strategy guides development and development reinforces strategy, growth becomes intentional, sustainable, and enduring.
dev.to
Build a YouTube Live Clone with Next.js, Clerk, and TailwindCSS - Part One Oluwabusayo Jacobs ・ Dec 12 #webdev #react #javascript #programming
dev.to

Today, Apple officially released iOS 26.2 for iPhone 11 and newer devices, which includes new Lock Screen customizations for you to adjust the opacity level, as well as the ability to use AirDrop with people who aren't in your contacts by sharing a one-time code with them instead. There are also new updates going out […]
theverge.com

Fired Michigan football coach appears in court
nbcnews.com

After announcing new Matter-compatible smart home devices and stylish wireless speakers over the past few months, Ikea continues to expand its consumer electronics offerings with three new wireless chargers. They're limited to 15W Qi2.0 charging rates but are well priced and feature designs that look more like home decor accents than traditional tech accessories. The […]
theverge.com
Bitwise Operations: A Simplified Guide for Beginners Stephen Gbolagade ・ Jun 28 '23 #bitwise #programming #binary #computerscience
dev.to

INDIANAPOLIS — As the redistricting battle began to pick up steam in Indiana last month, state Sen.
nbcnews.com

Security teams don’t have a CVE problem — they have a prioritization problem. CVSS tells us severity. EPSS tells us likelihood of exploitation. But defenders still end up asking: “Which CVEs do I actually fix first?”

To explore that gap, I built Day0Predictor v0.1 — a defensive, transparent CVE risk scoring tool that integrates EPSS signals with interpretable machine learning. This is not a zero-day detector and not a scanner. It’s a prioritization signal designed to be auditable and explainable.

🔍 What Day0Predictor Does

Combines EPSS score + percentile
Adds structured threshold features (≥0.01, ≥0.10, ≥0.50)
Trains a lightweight, interpretable model
Outputs: risk score (0–100), features used, reasons for the score, and clear disclaimers

No black box. No hype.

🧠 Why EPSS Alone Isn’t Enough

EPSS is powerful, but in practice:

Scores fluctuate daily
Context is missing (attack patterns, structure)
Defenders still need explanation

Day0Predictor treats EPSS as strong evidence, not truth. Think of it as: EPSS + structure + explainability.

🧪 Example Output

```json
{
  "cve_id": "CVE-2021-44228",
  "risk": 98,
  "mode": "trained_model_epss",
  "features": {
    "epss": 0.94358,
    "percentile": 0.99957,
    "epss_ge_050": 1.0
  },
  "reasons": [
    { "feature": "epss", "direction": "up" },
    { "feature": "percentile", "direction": "up" }
  ]
}
```

This is the kind of output defenders can audit and trust.

🛠️ CLI Usage

Score a CVE directly by ID using EPSS:

```
day0predict score-epss \
  --cve-id CVE-2021-44228 \
  --model models/day0predict.joblib \
  --format json
```

You can also score CVE JSON files directly.

📊 Model Notes

Logistic regression (intentionally simple)
Handles class imbalance
ROC-AUC ≈ 0.92
Explainability prioritized over complexity

This tool is meant to support human judgment, not replace it. (A toy sketch of this style of model appears at the end of this post.)

📦 Open Source

GitHub: https://github.com/ethicals7s/day0predictor-v0.1

MIT licensed. Feedback and PRs welcome.

🔮 What’s Next

Ideas for v0.2:

Time-aware training (train on past → predict future)
Explicit CISA KEV features
Lightweight web demo
Expanded text feature analysis

🧠 Final Thought

Security doesn’t need more hype tools. It needs boring, honest, defensible signals that help humans decide what matters now. That’s what I tried to build with Day0Predictor.
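For readers who want the general shape of such a scorer, here is a small sketch under stated assumptions: the feature set (EPSS, percentile, threshold indicators) mirrors the description above, but the training data is toy data and none of this is the actual Day0Predictor code; see the repo for the real implementation.

```python
# Illustrative sketch of an interpretable, EPSS-based risk scorer.
# Toy data and feature names are assumptions, not Day0Predictor's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_features(epss: float, percentile: float) -> np.ndarray:
    # Raw EPSS signals plus structured threshold features (>=0.01/0.10/0.50)
    return np.array([
        epss,
        percentile,
        float(epss >= 0.01),
        float(epss >= 0.10),
        float(epss >= 0.50),
    ])

# Toy training set: (epss, percentile, exploited-in-the-wild label)
samples = [(0.94, 0.999, 1), (0.40, 0.97, 1), (0.02, 0.60, 0), (0.001, 0.10, 0)]
X = np.array([make_features(e, p) for e, p, _ in samples])
y = np.array([label for _, _, label in samples])

# Logistic regression keeps every feature's contribution inspectable,
# and class_weight="balanced" compensates for class imbalance.
model = LogisticRegression(class_weight="balanced").fit(X, y)

def risk_score(epss: float, percentile: float) -> int:
    proba = model.predict_proba(make_features(epss, percentile).reshape(1, -1))[0, 1]
    return round(100 * proba)

# The coefficients double as the "reasons": positive weight pushes risk up.
names = ["epss", "percentile", "ge_001", "ge_010", "ge_050"]
for name, coef in zip(names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

print(risk_score(0.94358, 0.99957))  # Log4Shell-like input -> high score
```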
dev.to

Hollywood director Carl Erik Rinsch was convicted of scamming $11 million from Netflix to spend on luxury items, including five Rolls-Royces and a Ferrari, as reported earlier by Deadline. A New York jury found Rinsch guilty of several charges on Thursday, including fraud and money laundering. Rinsch, who's known for directing 47 Ronin, was charged […]
theverge.com

When we think of serverless and AWS Lambda, our minds usually jump straight to interpreted languages: Python, TypeScript. They're great, productive, and easy to edit in the console. But are they the only option? Definitely not. And more importantly: are they always the best option? Also no.

Today we're going to break the myth that Lambdas are only for lightweight scripts, and we'll see how C++ comes into play to offer serious performance and predictable execution times. To do that, we'll analyze the key piece that often scares developers: the build system, CMake.

Lambda is not a black box: the role of the Runtime

We sometimes see AWS as a magic box where you upload code and it "just works". But AWS is pure engineering, and understanding it gives us power. Lambdas don't run your code by magic. They need a Runtime. In Python or Node, AWS gives you the runtime pre-baked. But with C++, Go, or Rust, you can control that environment yourself.

What does the Runtime actually do? It's nothing more than an infinite loop that makes HTTP requests to an internal AWS API:

1. It asks: "Is there new work?"
2. If yes: it executes your function.
3. It sends the response back to AWS.
4. Repeat.

By using C++, we're not "hacking" Lambda; we're using the Custom Runtime API. And to avoid writing that HTTP loop ourselves, we use the aws-lambda-runtime library.

The code: simplicity in C++

To prove this isn't rocket science, look at this main.cpp. It's everything you need for a working Lambda:

```cpp
#include <aws/lambda-runtime/runtime.h>

using namespace aws::lambda_runtime;

// Your business logic goes here
invocation_response my_handler(invocation_request const& request)
{
    return invocation_response::success("Hello, World!", "application/json");
}

int main()
{
    // This starts the Runtime's infinite loop we mentioned earlier
    run_handler(my_handler);
    return 0;
}
```

See run_handler? That's the bridge. That's the code that connects your my_handler function to Amazon's infrastructure.

The architect: explaining the CMakeLists.txt

This is where many people stop. C++ requires compilation, and in the AWS world we need to package everything (binary + dependencies) into a .zip file. Fortunately, the C++ SDK for Lambda makes our lives easy. Let's analyze the configuration file line by line:

```cmake
cmake_minimum_required(VERSION 3.5)
set(CMAKE_CXX_STANDARD 11)
project(hello LANGUAGES CXX)

find_package(aws-lambda-runtime REQUIRED)

add_executable(${PROJECT_NAME} "main.cpp")
target_link_libraries(${PROJECT_NAME} PUBLIC AWS::aws-lambda-runtime)

aws_lambda_package_target(${PROJECT_NAME})
```

The step-by-step breakdown:

Basic configuration:

```cmake
cmake_minimum_required(VERSION 3.5)
set(CMAKE_CXX_STANDARD 11)
project(hello LANGUAGES CXX)
```

Nothing out of the ordinary here. We define the project and set C++11 as the standard.

Finding the "glue" (the Runtime):

```cmake
find_package(aws-lambda-runtime REQUIRED)
```

This is the critical line. It tells CMake: "Look for the aws-lambda-runtime library on the system." Note: for this to work, you must have previously installed the AWS Lambda C++ Runtime in your build environment (or in your CI/CD Docker container). This library contains the event-loop logic.

Creating the executable:

```cmake
add_executable(${PROJECT_NAME} "main.cpp")
```

This compiles our main.cpp and creates a binary named hello (the project name).

Linking (the connection):

```cmake
target_link_libraries(${PROJECT_NAME} PUBLIC AWS::aws-lambda-runtime)
```

This is where the linking magic happens. We join our code with the AWS library, which injects all the functionality run_handler needs to talk to the Lambda API.

Automatic packaging:

```cmake
aws_lambda_package_target(${PROJECT_NAME})
```

This is the crown jewel. This function is not standard CMake; it's a utility provided by the AWS library. What does it do? It takes your executable, finds the required shared dependencies, and compresses everything into a hello.zip ready to upload to the AWS console or deploy via Terraform/CDK. It saves you from writing manual bash scripts to build the zip.

Reflection: AWS makes you productive, not blind

This C++ example teaches us something valuable about AWS's philosophy. Often, the "cloud" feels like an abstraction that takes away control in exchange for convenience. But tools like the Runtime API and this C++ SDK show that AWS is not a hermetic black box. They give us tools to be productive (like aws_lambda_package_target, which automates the zip), but they leave the door open to drop down to the operating-system level, manage memory manually, and optimize every millisecond of execution if our business requires it.

Using C++ in Lambda isn't just about "speed"; it's about having total control over what happens in your infrastructure, paying only for the milliseconds you actually use.
dev.to

Fired Michigan football coach Sherrone Moore appeared virtually in court and is charged with home invasion, stalking and breaking and entering. NBC News' Shaquille Brewster reports on the charges, what prosecutors had to say about Moore's actions and what comes next.
nbcnews.com

Large Language Models (LLMs) changed the world — but Retrieval-Augmented Generation (RAG) is what makes them truly useful in real-world applications. Today, I'm excited to introduce Sanjeevani AI, our RAG-powered intelligent chat system designed to deliver accurate, context-aware, Ayurvedic-backed health insights. It’s fast, reliable, domain-specialized, and most importantly — built for real end-users who need clarity, not hallucinations.

In this article, I’ll break down:

Why RAG is becoming the backbone of modern AI systems
How RAG boosts accuracy, reliability, and trust
How we built and optimized Sanjeevani AI
The real-world impact on users
Why RAG-based systems are the future

The Problem with Standard LLMs: Hallucinations & Inconsistency

LLMs like GPT, Claude, and LLaMA are incredibly powerful — but they have one big flaw: they don’t know what they don’t know. When an LLM lacks domain-specific information (health, finance, law, agriculture, etc.), it tries to “guess.” And that guess often results in hallucinations — wrong answers delivered with total confidence. In a domain like healthcare, hallucinations are unacceptable. This is where Retrieval-Augmented Generation (RAG) becomes a game-changer.

What RAG Actually Does

RAG makes LLMs smarter by connecting them to an external knowledge base. Here’s the simple workflow:

User asks a question → System retrieves relevant documents from a verified dataset → The LLM uses those documents to produce an answer → The result is factual, grounded, and context-accurate

No guessing. No hallucinating. No generic responses. RAG turns an LLM into a domain expert, even if it wasn’t trained on that domain originally. This idea is so powerful that almost every modern AI company — from OpenAI to Meta — is now pushing RAG-based systems.

Introducing Sanjeevani AI — A RAG-Powered Health Companion

Sanjeevani AI is our AI system built to empower users with safe, reliable, and personalized health information rooted in Ayurveda and modern wellness science.

What makes Sanjeevani AI unique?

Uses RAG for domain-accurate responses
Powered by vector embeddings + semantic search
Integrates LLMs for natural conversation
Built with a curated Ayurvedic knowledge base
Supports symptom-based queries
Provides lifestyle tips, remedies, herbs, and diet suggestions
Built on a full-stack setup using Python, Flask, Supabase, and LLaMA

The result? Users get precise, trustworthy answers, backed by real medical text — not random LLM predictions.

How Our RAG Pipeline Works

Here’s the simplified architecture Sanjeevani AI uses:

User Question → Text Preprocessing → Vector Search in Ayurvedic Database → Top-k Relevant Chunks Retrieved → LLM Generates Context-Aware Response → Final Answer

Vector Database: We store Ayurvedic texts, symptom guides, food recommendations, herb details, and lifestyle protocols as embedding vectors.

Semantic Search: When the user asks something, the system retrieves the most relevant knowledge chunks instantly.

LLM Integration: The LLM (LLaMA-based) reads both the question and the retrieved context, then produces a grounded, accurate response. This solves hallucinations while still keeping the natural fluency of LLMs. (A minimal code sketch of this flow appears at the end of this post.)

Real-World Use Cases (Where RAG Truly Shines)

Symptom-based suggestions. Users can ask: “I have acidity and a mild headache. What should I do?” Sanjeevani AI retrieves remedies, herbs, and lifestyle recommendations backed by texts — not guesses.

Dietary and lifestyle planning. Users can ask: “What foods reduce inflammation naturally?” RAG ensures the response is pulled from credible knowledge sources.

Tech Stack (For Devs Who Love Details)

Backend: Python + Flask
Database: Supabase
Vector Search: Chroma & Pinecone
Embeddings: Sentence Transformers / LLaMA-based
LLM: LLaMA 4 (20B parameters)
Frontend: React Native (app and web)
RAG Pipeline: Custom-built retrieval + context injection

Everything is modular, scalable, and production-ready.

Impact on End Users: Reliability, Safety & Trust

End users don’t care about embeddings or vector stores. They care about one thing: “Can I trust the answer?” Sanjeevani AI ensures:

Accurate health information
Clear explanations
Personalized, actionable recommendations
Zero hallucinations
Fast responses
Easy-to-use interface

When technology becomes reliable, users feel empowered — and that’s the true purpose of AI.

Final Thoughts: RAG Isn’t Just an Add-On — It’s a Breakthrough

Sanjeevani AI is proof that when you combine LLMs + RAG + domain knowledge, you unlock smart, safe, and specialized AI systems that deliver real value to real people. AI is evolving fast, but RAG is what makes it practical. If you’re building anything with LLMs — chatbots, assistants, automation, knowledge tools — start with RAG first. It changes everything.
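As promised, here is a minimal sketch of the retrieve-then-generate flow described above, using Chroma and Sentence Transformers (both named in the tech stack). The sample documents, the collection name, and the call_llm() stub are invented for illustration; this is not Sanjeevani AI's actual pipeline.

```python
# Minimal retrieve-then-generate sketch. Sample texts, the collection name,
# and call_llm() are illustrative assumptions, not Sanjeevani AI's real code.
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
client = chromadb.Client()
kb = client.create_collection(name="ayurveda_kb")

# Index the knowledge base as embedding vectors
docs = [
    "Ginger tea and lighter, earlier dinners are traditionally used to ease acidity.",
    "Turmeric and ashwagandha are commonly cited herbs for reducing inflammation.",
]
kb.add(ids=[str(i) for i in range(len(docs))],
       documents=docs,
       embeddings=embedder.encode(docs).tolist())

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for the real LLaMA backend call
    return "[LLM response grounded in the supplied context]"

def answer(question: str, top_k: int = 2) -> str:
    # 1. Semantic search: pull the most relevant chunks from the vector store
    hits = kb.query(query_embeddings=embedder.encode([question]).tolist(),
                    n_results=top_k)
    context = "\n".join(hits["documents"][0])
    # 2. Context injection: tell the LLM to answer only from retrieved text
    prompt = (f"Answer using ONLY the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)

print(answer("I have acidity and a mild headache. What should I do?"))
```

Grounding the prompt in retrieved chunks, rather than the model's parametric memory, is the whole trick: the LLM supplies fluency, while the vector store supplies the facts.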
dev.to
The previous incarnation of this site lived happily on a Digital Ocean droplet - until react2shell came along. I put the whole thing together rather haphazardly and left my Umami login page open to the public. My droplet was compromised and became part of a botnet only a few days after CVE-2025-55182 was announced. React2Shell is a critical (CVSS 10.0) unauthenticated remote code execution vulnerability in React Server Components. The vulnerability allows attackers to execute arbitrary code on the server via a specially crafted HTTP request. In my case the attackers installed Nezha and Sliver. So this time around, I figured I'd do the complete opposite. How secure could I make my blog whilst spending as little as possible? My blog runs on Ghost, which requires MySQL. Umami v3 requires Postgres. The cheapest hosted databases are around $15/month each – $30 just to store a few megabytes of data. If I followed Docker/AWS best practices - Ghost and Umami would run as seperate ECS services on Fargate. That would cost ~$23/month. And that's before a NAT Gateway (~$32/month) or fck-nat (much cheaper). I considered Fargate Spot – typically 70% cheaper. The price of my two containers would drop from ~$23 to ~$7. But I would want to run at least two of each ($14). Being spot instances they can be turned off with a two minute warning whenever AWS needs the capacity back . However to run more than one instance of each, I would need a load balancer (~$16/month). Service Specs Monthly Cost Still need... Lightsail 2 vCPU, 2GB RAM ~$12 Nothing — all included EC2 (t3.small) 2 vCPU, 2GB RAM ~$15 EBS storage (~$2), NAT Gateway ($32), data transfer Fargate 2 vCPU, 2GB RAM ~$23 Load balancer ($16), NAT Gateway ($32) or VPC endpoints Fargate Spot 2 vCPU, 2GB RAM ~$14 Load balancer ($16), NAT Gateway ($32), redundancy Basically, hosting my blog "properly" wasn't worth the money. Since I already use AWS, I decided to over-engineer a cheaper solution. My 'enterprise architecture' is a Docker Compose stack running on a $12/month Lightsail instance, managed via Terraform. ➜ blog infracost breakdown --path . --show-skipped Name Monthly Qty Unit Monthly Cost aws_lightsail_instance.ghost └─ Virtual server (Linux/UNIX) 730 hours $11.77 aws_kms_key.replica ├─ Customer master key 1 months $1.00 ├─ Requests Monthly cost depends on usage: $0.03 per 10k requests ├─ ECC GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests └─ RSA GenerateDataKeyPair requests Monthly cost depends on usage: $0.10 per 10k requests module.s3_bucket_backup.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB module.s3_bucket_backup_replica.aws_s3_bucket.this[0] └─ Standard ├─ Storage Monthly cost depends on usage: $0.024 per GB ├─ PUT, COPY, POST, LIST requests Monthly cost depends on usage: $0.0053 per 1k requests ├─ GET, SELECT, and all other requests Monthly cost depends on usage: $0.00042 per 1k requests ├─ Select data scanned Monthly cost depends on usage: $0.00225 per GB └─ Select data returned Monthly cost depends on usage: $0.0008 per GB OVERALL TOTAL $12.77 *Usage costs can be estimated by updating Infracost Cloud settings, see docs for other options. 
──────────────────────────────────
40 cloud resources were detected:
∙ 4 were estimated
∙ 33 were free
∙ 3 are not supported yet, see https://infracost.io/requested-resources:
  ∙ 1 x aws_lightsail_disk
  ∙ 1 x aws_lightsail_disk_attachment
  ∙ 1 x aws_lightsail_instance_public_ports

Project | Baseline cost | Usage cost* | Total cost
main    | $13           | -           | $13

The damage? About $14.80/month (just over a tenner). That figure accounts for the instance plus a few extras Infracost missed, like the disk storage and an external KMS key. Since my backups are tiny, S3 costs are basically rounding errors.

For the rest of the infrastructure, I use Cloudflare's free tier. I had to enter card details for R2 (Cloudflare's object storage) but there's no way I'm getting close to hitting any of these limits.

The Architecture

Validation

Checkov approves, once I'd told it I wasn't really enterprise enough for SSO.

checkov | By Prisma Cloud | version: 3.2.495

terraform scan results:

Passed checks: 60, Failed checks: 0, Skipped checks: 2

Layer 1: Cloudflare (Edge Protection)

No exposed ports. There are zero inbound ports on my Lightsail instance (except SSH via AWS's browser console). All traffic flows through Cloudflare.

The Tunnel: The cloudflared container creates an encrypted outbound connection to the Cloudflare edge. When users access the domain, Cloudflare routes requests through this pre-established tunnel. The cloudflared container then acts as an internal reverse proxy, directing traffic to Ghost or Umami based on hostname.

WAF & DDoS: Cloudflare's Web Application Firewall sits in front of everything. Rate limiting, bot detection and DDoS mitigation happen before traffic ever reaches my infrastructure.

Caching: Static assets are cached at Cloudflare's edge. This reduces load on my tiny instance and means most requests never hit my server at all. Ghost's media assets are served directly from R2 via a custom domain. There's a little Cloudflare Worker that Ghost calls via webhook to purge the cache when necessary.

Zero Trust Access: This is the key difference from last time. Sensitive routes — /ghost/* (admin panel) and the Umami dashboard — are both protected by Cloudflare Access. Users must authenticate via email code before Cloudflare even allows the request through the tunnel. If React2Shell v2 drops tomorrow, the attack surface is much smaller. Shodan won't even know what lives at umami.clegginabox.co.uk. There's no open port or favicon to fingerprint, no version header to scrape. Just the Cloudflare Access page.

Layer 2: Host Hardening

The Lightsail instance itself is locked down:

No public SSH. SSH access is only available through Lightsail's browser-based console, which requires AWS console authentication (with 2FA). There's no port 22 exposed to the internet.
resource "aws_lightsail_instance_public_ports" "ghost" {
  instance_name = aws_lightsail_instance.ghost.name

  port_info {
    protocol          = "tcp"
    from_port         = 22
    to_port           = 22
    cidr_list_aliases = ["lightsail-connect"] # Browser SSH only
  }
}

Kernel hardening: Sysctl settings to prevent IP spoofing, disable ICMP redirects, enable SYN flood protection and disable IPv6.

# IP Spoofing protection
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Ignore source-routed packets
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# SYN flood protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2

# Ignore ICMP broadcasts
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Log martian packets
net.ipv4.conf.all.log_martians = 1

# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

Automatic updates: Unattended upgrades are enabled. Security patches apply automatically.

Firewall: UFW is configured as a secondary layer (though Lightsail's firewall takes precedence). Can't hurt to have two firewalls, right?

Layer 3: Container Isolation

services:
  ghost:
    image: ghcr.io/clegginabox/clegginabox.co.uk:latest
    restart: always
    user: "1000:1000"
    expose:
      - "2368"
    environment:
      url: https://${GHOST_DOMAIN}
      # Database Config
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: ghost
      database__connection__password: ${MYSQL_PASSWORD}
      database__connection__database: ghost
      # Mail Config
      mail__transport: SMTP
      mail__from: "noreply@${GHOST_DOMAIN}"
      mail__options__host: email-smtp.${AWS_REGION}.amazonaws.com
      mail__options__port: "587"
      mail__options__secure: "false"
      mail__options__auth__user: ${MAIL_USER}
      mail__options__auth__pass: ${MAIL_PASS}
      # Object storage config
      storage__active: s3
      storage__s3__region: auto
      storage__s3__bucket: ${R2_BUCKET}
      storage__s3__endpoint: https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com
      storage__s3__accessKeyId: ${R2_ACCESS_KEY}
      storage__s3__secretAccessKey: ${R2_SECRET_KEY}
      storage__s3__assetHost: ${R2_PUBLIC_DOMAIN}
      storage__s3__forcePathStyle: true
    volumes:
      - /mnt/data/ghost:/var/lib/ghost/content
    depends_on:
      mysql:
        condition: service_healthy
    security_opt:
      - no-new-privileges:true
    networks:
      - frontend
      - ghost-db

  tunnel:
    image: cloudflare/cloudflared:2025.11.1
    restart: always
    command: tunnel run
    read_only: true
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}
    depends_on:
      umami:
        condition: service_healthy
    security_opt:
      - no-new-privileges:true
    networks:
      - frontend

  mysql:
    image: mysql:8.4.7
    restart: always
    user: "999:999"
    command:
      # MySQL likes to use loads of RAM (~400MB) as standard...
      - --innodb-buffer-pool-size=128M
      - --innodb-log-buffer-size=8M
      - --performance-schema=OFF
      - --max-connections=50
      - --key-buffer-size=8M
      - --thread-cache-size=4
      - --tmp-table-size=16M
      - --max-heap-table-size=16M
      - --table-open-cache=400
      - --table-definition-cache=400
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - /mnt/data/mysql:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 30s
      timeout: 10s
      retries: 5
    security_opt:
      - no-new-privileges:true
    networks:
      - ghost-db

  umami:
    image: ghcr.io/umami-software/umami:3.0.2
    restart: always
    user: "1000:1000"
    expose:
      - "3000"
    environment:
      DATABASE_URL: postgresql://umami:${POSTGRES_PASSWORD}@postgres:5432/umami
      APP_SECRET: ${UMAMI_SECRET}
    depends_on:
      postgres:
        condition: service_healthy
    init: true
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3000/api/heartbeat"]
      interval: 30s
      timeout: 10s
      retries: 5
    security_opt:
      - no-new-privileges:true
    networks:
      - frontend
      - umami-db

  postgres:
    image: postgres:18.1-alpine
    restart: always
    user: "70:70"
    command:
      - -c
      - shared_buffers=64MB
      - -c
      - effective_cache_size=128MB
      - -c
      - work_mem=4MB
      - -c
      - maintenance_work_mem=32MB
    environment:
      POSTGRES_DB: umami
      POSTGRES_USER: umami
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - /mnt/data/postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U umami -d umami"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s
    security_opt:
      - no-new-privileges:true
    networks:
      - umami-db

  diun:
    image: crazymax/diun:4.30.0
    restart: always
    user: "1000:1000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /mnt/data/diun:/data
    environment:
      TZ: Europe/London
      DIUN_WATCH_SCHEDULE: 0 8 * * * # Check daily at 8am
      DIUN_PROVIDERS_DOCKER: true
      DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: true
      DIUN_NOTIF_MAIL_HOST: email-smtp.${AWS_REGION}.amazonaws.com
      DIUN_NOTIF_MAIL_PORT: 587
      DIUN_NOTIF_MAIL_SSL: false
      DIUN_NOTIF_MAIL_USERNAME: ${MAIL_USER}
      DIUN_NOTIF_MAIL_PASSWORD: ${MAIL_PASS}
      DIUN_NOTIF_MAIL_FROM: "noreply@${GHOST_DOMAIN}"
      DIUN_NOTIF_MAIL_TO: ${NOTIF_MAIL_TO}
    security_opt:
      - no-new-privileges:true

# Segregate containers - ghost doesn't need access to postgres etc
networks:
  frontend:
  ghost-db:
    internal: true
  umami-db:
    internal: true

Even if an attacker compromises Ghost or Umami, I want to limit what they can do.

Non-root users: Every container runs as a non-root user.

ghost:    user: "1000:1000"
mysql:    user: "999:999"
postgres: user: "70:70"
umami:    user: "1000:1000"

No privilege escalation: All containers have no-new-privileges set, preventing processes from gaining additional privileges via setuid binaries or other mechanisms.

security_opt:
  - no-new-privileges:true

Read-only filesystems: The cloudflared container runs with a read-only root filesystem. An attacker can't write persistent backdoors.

tunnel:
  read_only: true

Network segmentation: Containers can only talk to what they need.

networks:
  frontend:       # Ghost, Umami, Tunnel
  ghost-db:       # Ghost + MySQL only
    internal: true
  umami-db:       # Umami + Postgres only
    internal: true

Ghost can reach MySQL but not Postgres. Umami can reach Postgres but not MySQL. Neither database is accessible from the tunnel container. If Ghost gets compromised, the attacker can't pivot to the Umami database (and vice versa).

Health checks with dependencies: Containers don't start until their dependencies are healthy. This prevents race conditions and ensures clean startup order.
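It's worth checking that these settings actually stick once the stack is up. Here's a quick audit sketch using the Docker SDK for Python (pip install docker); it isn't part of my stack, just a convenient way to eyeball the hardening from the host:

# audit_containers.py - sanity-check hardening on the running Compose stack
# Sketch only: inspects each container's user, rootfs mode, security options
# and attached networks via the Docker SDK for Python.
import docker

client = docker.from_env()

for c in client.containers.list():
    cfg = c.attrs["Config"]
    host_cfg = c.attrs["HostConfig"]
    user = cfg.get("User") or "root (!)"  # empty string means the container runs as root
    sec_opt = host_cfg.get("SecurityOpt") or []
    networks = list(c.attrs["NetworkSettings"]["Networks"])
    print(c.name)
    print(f"  user:              {user}")
    print(f"  read-only rootfs:  {host_cfg.get('ReadonlyRootfs', False)}")
    print(f"  no-new-privileges: {'no-new-privileges:true' in sec_opt}")
    print(f"  networks:          {', '.join(networks)}")

Run it after docker compose up and every container should report a numeric user, the tunnel a read-only rootfs, and each database exactly one internal network.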
Performance tuning for a small instance: My 2GB instance didn't have much in the way of free RAM with everything running. MySQL uses ~400MB of RAM with its standard config. I'd like to run a little comment system at some point without crashing the whole thing.

command:
  # MySQL likes to use loads of RAM (~400MB) as standard...
  - --innodb-buffer-pool-size=128M
  - --innodb-log-buffer-size=8M
  - --performance-schema=OFF
  - --max-connections=50
  - --key-buffer-size=8M
  - --thread-cache-size=4
  - --tmp-table-size=16M
  - --max-heap-table-size=16M
  - --table-open-cache=400
  - --table-definition-cache=400

Layer 4: Secrets Management

No secrets are hardcoded. Database passwords, SMTP credentials, R2 keys etc are all stored in AWS SSM Parameter Store and encrypted with KMS. When the instance starts up, it uses a scoped IAM user to fetch the secrets and write them to environment variables.

Unlike EC2, Lightsail has no instance profiles. The credentials therefore persist on the instance and would be accessible to anyone with shell access. This is less than ideal, but the policy follows least privilege:

resource "aws_iam_policy" "ghost_instance_policy" {
  name        = "ghost-instance-policy"
  description = "Allows Ghost instance to read SSM secrets and write S3 backups"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["ssm:GetParameter", "ssm:GetParameters"]
        Resource = "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/ghost/*",
        Condition = {
          Bool = {
            "aws:SecureTransport" = "true"
          }
        }
      },
      {
        Effect   = "Allow"
        Action   = ["s3:PutObject"]
        Resource = "${module.s3_bucket_backup.s3_bucket_arn}/*"
      },
      # SSM KMS Key Access
      {
        Effect = "Allow"
        Action = [
          "kms:GenerateDataKey",
          "kms:Decrypt"
        ]
        Resource = data.aws_kms_key.ssm_key.arn
      },
      # Backup S3 KMS Key Access
      {
        Effect = "Allow"
        Action = [
          "kms:GenerateDataKey",
          "kms:Decrypt"
        ]
        Resource = data.aws_kms_key.backup_key.arn
      }
    ]
  })
}

Layer 5: Storage & Backups

Separate data disk: Persistent data (databases, Ghost content) lives on an attached 8GB Lightsail disk mounted at /mnt/data. My Lightsail instance comes with a 60GB disk but it's ephemeral.

Media on R2: Ghost uploads images directly to Cloudflare R2. Media is served from a custom domain with Cloudflare's CDN in front. Fast load times for visitors and less load on my instance.

Daily backups: A cron job dumps MySQL and Postgres to S3 daily:

# MySQL
docker compose -f /opt/ghost/docker-compose.yml exec -T mysql mysqldump \
  -u ghost \
  -p"$MYSQL_PASSWORD" \
  --single-transaction \
  --quick \
  --no-tablespaces \
  ghost | gzip > "$BACKUP_DIR/ghost_$DATE.sql.gz"

# Postgres
docker compose -f /opt/ghost/docker-compose.yml exec -T postgres pg_dump \
  -U umami \
  umami | gzip > "$BACKUP_DIR/umami_$DATE.sql.gz"

# Upload
aws s3 cp "$BACKUP_DIR/ghost_$DATE.sql.gz" "s3://$S3_BUCKET/ghost/"
aws s3 cp "$BACKUP_DIR/umami_$DATE.sql.gz" "s3://$S3_BUCKET/umami/"

Cross-region replication: Adding this turned out to be way more complex than I'd expected. The backup bucket replicates to another region. In the very unlikely event that eu-west-2 burns down, I still have my data. Though I'd imagine I'd have bigger worries than my blog if half of London was on fire.

Layer 6: Monitoring

Image updates: Diun watches all containers and emails me when new versions are available. I'm not running :latest tags (except Ghost, which I build myself). I want to know when updates are released but choose when to deploy them.

Backup monitoring: Failed backups send email notifications.
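For a sense of what that backup check could look like: a minimal sketch with boto3, assuming backups land under the ghost/ and umami/ prefixes from the cron job above (the bucket name, 26-hour threshold and alerting stub are placeholders, not my actual script):

# check_backups.py - alert if the newest backup in S3 is older than a day
# Sketch with boto3; prefixes match the cron job above, alerting left as a stub.
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "my-backup-bucket"     # placeholder
MAX_AGE = timedelta(hours=26)   # daily cron plus some slack

s3 = boto3.client("s3")

def newest_key_age(prefix: str) -> timedelta:
    """Return the age of the most recent object under prefix (first 1000 keys)."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    objects = resp.get("Contents", [])
    if not objects:
        return timedelta.max
    newest = max(o["LastModified"] for o in objects)
    return datetime.now(timezone.utc) - newest

for prefix in ("ghost/", "umami/"):
    age = newest_key_age(prefix)
    if age > MAX_AGE:
        print(f"ALERT: newest backup under {prefix} is {age} old")  # swap for SES/Slack
    else:
        print(f"OK: {prefix} backed up {age} ago")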
New Relic: I haven't got round to implementing this again yet; it's next on the list.

Obviously this is seriously over-engineered for a personal blog. It's not enterprise either. Deployments mean spinning up a new instance and running a bash script to bootstrap everything, which takes the site down for a few minutes. The bootstrap credential is less than ideal, but is it worth spending more money and using EC2 to get around it? Not really.

Cloudflare is a single point of trust. If someone breaches that account, the whole thing falls down. But does anyone else offer what they do for free?

Ghost itself is probably the weakest link in the chain. Node's dependency tree is vast - when the maintainer of event-stream handed the project to a stranger in 2018, that stranger quietly added code to steal Bitcoin wallets. It only took a few days from React2Shell being announced for my previous site to be compromised.

This has been a fun little project though. If (when) my site breaks again, I can spin up a brand new one with two commands in the terminal.

I've published the Terraform on GitHub. I've only recently started using Cloudflare and I've not been using Terraform all that long, so I'd genuinely appreciate feedback - if you spot something stupid or have suggestions, please open an issue or PR.
dev.to

Reddit sues Australian government over social media ban
nbcnews.com

The Trump administration has delayed a decision on whether to extend federal protections to monarch butterflies indefinitely
abcnews.go.com

The top leader of the Anglican Church in North America faces a church trial over alleged abuse of power and sexual immorality
abcnews.go.com

President Donald Trump says Thai and Cambodian leaders have agreed to renew a truce after days of deadly clashes had threatened to undo a ceasefire the U.S. administration had helped broker earlier this year
abcnews.go.com

Google Translate's latest update brings live speech translations, originally available only on the Pixel Buds, to any headphones you want, with support for over 70 languages. It's rolling out today in beta and just requires a compatible Android phone with the Translate app (unlike Apple's similar feature, which requires AirPods). It's one of a few […]
theverge.com

TL;DR: Store secrets in AWS Secrets Manager. Generate .env files on demand with a Python script. Never commit credentials again.

The Problem

Every team commits secrets eventually. GitHub detected over 12 million exposed credentials last year through their secret scanning. The usual approaches all have failure modes:

- .gitignore fails when developers forget to add it, or clone fresh and ask for the file via Slack
- SOPS encryption still puts files in git, adds key management overhead, and creates merge conflict nightmares
- .env.example templates get stale and require manual copying

We needed something better: secrets that live outside the repository entirely, with a frictionless developer experience.

The Solution

Secrets live in AWS Secrets Manager. Developers run one command to generate their .env file:

make env
# .env is generated locally, ready to use

The file is gitignored. It never touches version control. When secrets change in AWS, developers regenerate and get the latest values.

Implementation

1. Organize Secrets in AWS

Structure your secrets by application and environment:

/myapp/dev/database   → {"DB_HOST": "...", "DB_PASSWORD": "..."}
/myapp/dev/api-keys   → {"STRIPE_KEY": "...", "SENDGRID_KEY": "..."}
/myapp/prod/database  → {"DB_HOST": "...", "DB_PASSWORD": "..."}
/myapp/prod/api-keys  → {"STRIPE_KEY": "...", "SENDGRID_KEY": "..."}

Create secrets using AWS CLI:

aws secretsmanager create-secret \
  --name /myapp/dev/database \
  --secret-string '{"DB_HOST":"localhost","DB_PASSWORD":"devpass123"}'

2. The Python Script

Here's the full script that generates .env files:

#!/usr/bin/env python3
"""
Generate .env file from AWS Secrets Manager.

Usage:
    python generate_env.py dev
    python generate_env.py prod --force
"""
import argparse
import json
import os
import sys
from pathlib import Path

import boto3
from botocore.exceptions import ClientError, NoCredentialsError

# Configuration
APP_NAME = "myapp"
AWS_REGION = os.environ.get("AWS_REGION", "us-east-1")
ENV_FILE = ".env"
SECRET_KEYS = ["database", "api-keys", "third-party"]


def get_secret(secret_name: str, region: str) -> dict:
    """Fetch a secret from AWS Secrets Manager."""
    client = boto3.client("secretsmanager", region_name=region)
    try:
        response = client.get_secret_value(SecretId=secret_name)
        return json.loads(response.get("SecretString", "{}"))
    except ClientError as e:
        if e.response["Error"]["Code"] == "ResourceNotFoundException":
            print(f"  Warning: Secret '{secret_name}' not found")
            return {}
        raise


def validate_aws_credentials() -> bool:
    """Check if AWS credentials are configured."""
    try:
        sts = boto3.client("sts")
        identity = sts.get_caller_identity()
        print(f"Authenticated as: {identity['Arn']}")
        return True
    except NoCredentialsError:
        print("Error: AWS credentials not found.")
        print("\nFix with one of:")
        print("  1. aws configure")
        print("  2. Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY")
        print("  3. Use IAM role (if on AWS)")
        return False


def fetch_all_secrets(environment: str, region: str) -> dict:
    """Fetch all secrets for the environment."""
    all_secrets = {}
    for key in SECRET_KEYS:
        secret_path = f"/{APP_NAME}/{environment}/{key}"
        print(f"  Fetching: {secret_path}")
        all_secrets.update(get_secret(secret_path, region))
    return all_secrets


def generate_env_content(secrets: dict) -> str:
    """Generate .env content from secrets."""
    lines = [
        "# Auto-generated from AWS Secrets Manager",
        "# DO NOT COMMIT THIS FILE",
        "",
    ]
    for key, value in sorted(secrets.items()):
        if isinstance(value, str) and " " in value:
            value = f'"{value}"'
        lines.append(f"{key}={value}")
    return "\n".join(lines) + "\n"


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("environment", choices=["dev", "staging", "prod"])
    parser.add_argument("-f", "--force", action="store_true")
    parser.add_argument("-o", "--output", default=ENV_FILE)
    args = parser.parse_args()

    print(f"Generating .env for '{args.environment}'\n")

    if not validate_aws_credentials():
        sys.exit(1)

    secrets = fetch_all_secrets(args.environment, AWS_REGION)
    if not secrets:
        print(f"\nError: No secrets found at /{APP_NAME}/{args.environment}/*")
        sys.exit(1)

    print(f"\nFound {len(secrets)} secret values")
    content = generate_env_content(secrets)

    path = Path(args.output)
    if path.exists() and not args.force:
        if input(f"{args.output} exists. Overwrite? [y/N]: ").lower() != "y":
            sys.exit(0)

    path.write_text(content)
    print(f"Generated: {args.output}")


if __name__ == "__main__":
    main()

3. Shell Wrapper and Makefile

Create a shell wrapper for convenience:

#!/bin/bash
# generate-env.sh
set -e

ENV=${1:-dev}

if ! python3 -c "import boto3" 2>/dev/null; then
  pip3 install boto3 --quiet
fi

python3 "$(dirname "$0")/generate_env.py" "$ENV" "${@:2}"

Add Makefile targets:

.PHONY: env env-dev env-prod

env:
	@./scripts/generate-env.sh dev

env-dev:
	@./scripts/generate-env.sh dev

env-prod:
	@./scripts/generate-env.sh prod

env-dry:
	@./scripts/generate-env.sh dev --dry-run

(Note: env-dry assumes a --dry-run flag; the script above doesn't define one yet, so add it to the argparse options first.)

4. GitHub Actions with OIDC

No stored credentials needed. Use OIDC to assume an AWS role:

name: Deploy
on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-actions
          aws-region: us-east-1

      - name: Generate .env
        run: |
          pip install boto3
          python scripts/generate_env.py prod --force

      - name: Deploy
        run: |
          # Your deployment commands
          echo "Deploying..."

      - name: Cleanup
        if: always()
        run: rm -f .env

5. GitLab CI

Same pattern with GitLab's OIDC:

deploy:
  stage: deploy
  image: python:3.11-slim
  script:
    - pip install boto3
    - |
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
        $(aws sts assume-role-with-web-identity \
          --role-arn ${AWS_ROLE_ARN} \
          --role-session-name "gitlab-${CI_PIPELINE_ID}" \
          --web-identity-token ${CI_JOB_JWT_V2} \
          --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
          --output text))
    - python scripts/generate_env.py prod --force
    - echo "Deploying..."
  after_script:
    - rm -f .env

IAM Permissions

Developers need:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:us-east-1:*:secret:/myapp/dev/*"
    }
  ]
}

CI/CD roles need access to prod secrets:

{
  "Effect": "Allow",
  "Action": ["secretsmanager:GetSecretValue"],
  "Resource": "arn:aws:secretsmanager:us-east-1:*:secret:/myapp/prod/*"
}

OIDC Setup for GitHub Actions

Create the OIDC provider in AWS:

aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com

Create the trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:*"
        }
      }
    }
  ]
}

Results

After implementing this:

Metric               | Before  | After
Secrets in git       | 47      | 0
Rotation time        | 2 hours | 5 minutes
Setup time           | 45 min  | 10 min
Slack secret sharing | Weekly  | Never

Repository Structure

myapp/
├── scripts/
│   ├── generate_env.py
│   └── generate-env.sh
├── .github/
│   └── workflows/
│       └── deploy.yml
├── .gitignore       # includes .env
├── .env.example     # dummy values for reference
├── Makefile
└── README.md

Try It

The full code is available at github.com/mateenali66/secrets-env-generator. Clone it, configure your AWS credentials, create some test secrets, and run make env.

Questions? Contact me on .
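One closing sketch: how the application side might consume the generated file. This uses python-dotenv, which is an assumption on my part (the generator above only writes the file; it doesn't load it), and the key names come from the example secrets earlier:

# app.py - consume the generated .env at startup
# Sketch assuming python-dotenv (pip install python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into os.environ

db_host = os.environ["DB_HOST"]            # fail fast if a required key is missing
stripe_key = os.environ.get("STRIPE_KEY")  # optional keys can default to None

print(f"Connecting to {db_host}...")

With this split, the app never needs AWS access at runtime; it just reads plain environment variables while the script handles Secrets Manager.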
dev.to

The AI landscape exploded today with OpenAI's landmark release of GPT-5.2, hailed by Sam Altman as "the smartest generally-available model in the world," particularly excelling at real-world knowledge work like crafting slides, spreadsheets, and code. In a detailed thread, Altman highlighted GPTval benchmarks where GPT-5.2 achieved a 70% preference rate among industry experts—doubling GPT-5's 38%—alongside leaps to 55.6% on SWE-Bench Pro, 52.9% on ARC-AGI-2, and 40.3% on Frontier Math, positioning it as a frontier powerhouse for enterprise tasks. Greg Brockman echoed the excitement, calling GPT-5.2 Pro the most advanced model for professional work and long-running agents, with SOTA results on ARC-AGI showing 90.5% on ARC-AGI-1 at $11.64 per task—a staggering 390x efficiency gain over last year's o3 preview, as verified by ARC Prize. This progress underscores ARC-AGI's push toward fluid intelligence beyond memorization, though humans remain orders of magnitude more efficient. Early testers raved: Chubby (@kimmonismus) marveled at 80% SWE Verified and 52.9% ARC-AGI-2, while Allie K. Miller praised its deeper problem-solving and self-improving OCR code but noted overly verbose outputs better suited for power users than casual chats.

Integrations rolled out swiftly, amplifying impact. Satya Nadella announced GPT-5.2's native embedding in Microsoft tools like M365 Copilot, GitHub Copilot, Foundry, and consumer experiences, praising its "Work IQ" for reasoning across docs, emails, and meetings—available today via model picker. Perplexity followed suit, making it live for Pro and Max subscribers. Altman teased future file outputs but emphasized the "biggest upgrade in a long time," with Chubby noting 30-40% fewer hallucinations per the system card.

Amid the frenzy, TIME crowned the "Architects of AI"—collectively honoring pioneers—as 2025 Person of the Year, declaring it "the year AI’s full potential roared into view," with no turning back. Fei-Fei Li responded humbly to her inclusion, urging a human-centered mission from Alan Turing's era toward spatial intelligence frontiers: "AI is built by generations of technologists... Let’s keep our AI mission human-centered for the benefit of humanity!" — Fei-Fei Li

Applications shone brightly elsewhere. Elon Musk revealed Grok will power personalized education nationwide in El Salvador, a viral coup with 54.5K likes signaling real-world scaling. At Tesla, Musk disclosed AI5/AI6 engineering dominates his time, promising "good" and "great" hardware leaps. Nadella demoed his chain-of-debate app for deep research at a Bengaluru event, teasing Copilot enhancements.

Yet hype met skepticism in a 125K-like viral satire from Peter Girnus on enterprise AI theater: rolling out Microsoft Copilot to 4,000 at $1.4M/year, fabricating "10x productivity" and "40,000 hours saved" for board decks, while usage languished at 47 openers—pure "AI enablement" buzz for promotions. Economic tremors surfaced too, with Carlos E. Perez unpacking a Yale economist's paper "We Won't Be Missed": AGI automates bottleneck work, tying GDP to compute growth, driving labor's GDP share to zero as wages peg to cheap replication costs—rendering knowledge workers "economically invisible" even as prosperity booms.

Capping a reflective day, Brockman marked OpenAI's 10-year anniversary, eyes on the next decade amid accelerating benchmarks, integrations, and societal shifts.
Today's torrent—from GPT-5.2's SOTA strides to adoption ironies—crystallizes 2025's AI inflection: raw capability surges, but questions of value, equity, and humanity loom large.
dev.to

Problem Link

https://leetcode.com/problems/invalid-tweets/

Solution

# Write your MySQL query statement below
select tweet_id
from Tweets
where char_length(content) > 15;

Note: in MySQL, length() counts bytes while char_length() counts characters, so char_length() is the safer choice when content can contain multi-byte characters.
dev.to

Samsung’s Frame TV is one of the coolest-looking TVs you can buy, doubling as wall art when you’re not watching —  but it’s pricey. If you like the idea of a TV that displays art when not in use but is far more affordable, Hisense’s S7N CanvasTV delivers a similar vibe for a lot less, […]
theverge.com

Problem Link

https://leetcode.com/problems/article-views-i/

Solution

# Write your MySQL query statement below
select distinct author_id as id
from Views
where author_id = viewer_id
order by id asc;
dev.to

Kilmar Abrego Garcia speaks after ICE custody release
nbcnews.com

TL;DR

A developer told Google Antigravity to "clear the cache," and the AI confidently yeeted his entire D: drive into oblivion. Not because Antigravity is evil, but because AI hallucinations + system-level autonomy = digital apocalypse speedrun. The lesson? AI coding agents are powerful, helpful… and dangerously overconfident without boundaries.

What we need now:

- Permission prompts before any destructive action
- Limited file access (turn OFF non-workspace access!)
- Sandboxed execution
- Default deny lists for commands like rm -rf
- Transparent logs
- A Markdown-based safety rulebook like agent.md

Antigravity already reads .md files, so we can use them today to guide its behavior until real, enforced guardrails arrive. AI isn't the problem. Lack of guardrails is. Use AI, but use it safely. Your drives will thank you.

We've all seen that wild piece of news racing around the internet: the one where a developer casually said, "Hey Antigravity, clear my cache," and Antigravity replied, "Sure… let me just delete your entire D: drive real quick." Because apparently, in 2025, even AI assistants believe in extreme minimalism.

Funny? Yes. Terrifying? Also yes. But this whole situation is a perfect moment for all of us to step back and ask a very real question: are we actually using AI coding assistants safely, or are we just hoping they won't go full Thanos on our files?

Now, I'm not here to blame Antigravity. We all know hallucinations happen. AI hears "clear cache" and sometimes translates it as "obliterate storage." It's not ideal, but it's the world we live in. Hallucination isn't new; it's practically a feature at this point.

So instead of pointing fingers, let's focus on what really matters:

👉 How do we prevent AI tools from accidentally nuking our systems?
👉 What guardrails do we need as developers?
👉 And how can we keep getting the benefits of AI without risking spontaneous drive deletion?

I've got a few ideas to share: practical, simple, and maybe even sanity-saving. If you're curious (or if you value your hard drives), keep reading. This blog might just save your files… or at least your blood pressure.

So… What Actually Happened? (And Why It Matters)

So here's the short version of the chaos: a developer typed a harmless request, basically "Hey Antigravity, clear the cache", and Antigravity, in full confidence and zero hesitation, said: "Got it! Let me clear… your entire D: drive." If AI had a personality, this one definitely woke up and chose violence.

And just like that, poof, hundreds of files gone. Years of code, screenshots, documents, maybe even a secret folder named "final_final_really_final_version(2).xlsx", all gone because an AI decided to hallucinate a file path.

Now, before we roast Antigravity, let's remember something important: hallucinations are not bugs… they're more like uninvited guests that show up in every AI model ever built. LLMs hallucinate. Agentic AIs hallucinate. Even your best friend's fancy AI-powered chatbot hallucinated earlier today; it just didn't delete a drive, so nobody cared.

This is not an Antigravity problem… this is an agent autonomy problem. As AI coding assistants get more powerful, more helpful, and more independent, they also get more capable of doing extremely dumb things extremely fast. And that's when we have to ask:

- Are we giving AI too much freedom?
- Should agentic tools be allowed to run system-level commands without human approval?
- And why on earth did no one think to add: "If user says clear cache, maybe don't blow up the entire drive" as a rule?
These are the questions that bring us here. This isn't just a funny internet disaster; it's a warning. A friendly, slightly explosive reminder that AI needs guardrails just as much as we do. And lucky for you, I have some ideas. Sit tight: next, we'll talk about what you can actually do to protect yourself (and your drives) from AI agents that occasionally forget what century they're in.

How to Stop Your AI From Going Full Supervillain (Practical Tips You Can Actually Use)

Alright, so now that we've accepted the reality that AI assistants sometimes hallucinate harder than students during final exams, let's talk about how to protect your precious files, sanity, and emotional well-being. Here are some battle-tested tips (including one straight from the Antigravity settings menu), with a small code sketch after the list to make a couple of them concrete:

1. Turn Off "Non-Workspace File Access" in Antigravity

No joke: this one setting alone can save your entire digital existence. Inside Antigravity's Agent settings, there's an option called "Agent Non-Workspace File Access". When this is ON, Antigravity can wander outside your project folder and explore your entire system like a curious toddler armed with administrator privileges. When this is OFF? The AI stays in its lane. No surprise explorations. No spontaneously-obliterated drives. No unplanned vacations to the Shadow Realm of Deleted Files. Turn. It. Off. Your future self will thank you.

2. Sandbox the Agent: Let It Break Fake Things

Run Antigravity (or any agentic AI) inside:

- a VM
- a Docker container
- or even a restricted workspace

Think of it as giving your AI a "playpen." It can throw things, experiment, hallucinate weird commands, but it can't escape and destroy your system, like a polite T-rex behind glass.

3. Don't Let It Execute Commands Without Asking You First

Many agentic tools let you choose:

- Ask before running shell commands
- Ask before modifying files
- Ask before touching anything remotely dangerous

Turn those prompts ON. If your AI tries to delete a folder you didn't ask it to touch, the system should go: "Umm… I don't think you meant this. Confirm?" Boom. Saved.

4. Keep Git and Backups as Your Lifeline

If AI deletes something important, but you have:

- Git
- Cloud backup
- Time Machine
- Snapshots

…you survive. If you don't? Well… you get a blog-worthy story like the guy whose entire D: drive became a digital ghost town.

5. Create Your Own AGENT_GOVERNANCE.md File

Even though Antigravity doesn't enforce rules from Markdown yet, it does read them. So you can write:

- "Don't execute destructive commands."
- "Don't touch outside the project folder."
- "Ask before modifying system files."
- "Do NOT hallucinate root paths."
- "No random self-promotion." (optional)

This helps steer the agent's reasoning just enough to reduce chaos. Is it perfect? No. Is it better than nothing? Definitely.

6. Always Assume AI Autonomy = Misunderstanding Potential

The more power we give agents:

- running commands
- browsing files
- writing scripts
- modifying configs

…the more we need to behave like responsible adults supervising an overconfident toddler. AI doesn't destroy things out of malice. It does it because it thinks it's helping, which is somehow even more terrifying.

7. Bonus Tip: Don't Panic, Just Plan

AI isn't going anywhere. Agentic tools will only get more powerful. And yes, hallucination is a permanent housemate. But if we:

- add guardrails
- restrict capabilities
- supervise dangerous actions
- make smart configurations

…we can enjoy all the benefits of AI tools without waking up to a drive full of missing files and regret.
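Here's that sketch: a toy Python guardrail showing tips 3 and 5 in spirit, a deny list plus a confirmation prompt. To be clear, this is not how Antigravity works internally; the patterns and commands are made up for illustration:

# guardrail.py - toy deny-list + confirmation wrapper for shell commands
# Illustrative only: real agents need OS-level sandboxing, not string checks.
import re
import subprocess

DENY_PATTERNS = [
    r"\brm\s+-rf\b",   # recursive force delete
    r"\bdel\s+/s\b",   # Windows recursive delete
    r"\bformat\b",     # drive formatting
    r"\bmkfs\b",       # filesystem creation
]

def run_guarded(command: str) -> None:
    """Block denied commands outright; ask a human before running anything else."""
    for pattern in DENY_PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            print(f"BLOCKED: {command!r} matches deny pattern {pattern!r}")
            return
    answer = input(f"About to run: {command!r} - confirm? [y/N]: ")
    if answer.lower() == "y":
        subprocess.run(command, shell=True, check=False)
    else:
        print("Skipped.")

run_guarded("rm -rf /important/stuff")  # blocked before it can do damage
run_guarded("ls -la")                   # asks first, then runs

String matching like this is trivially bypassable, which is exactly why we also want sandboxes and enforced permissions rather than polite suggestions.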
Why AI Needs Guardrails (And Why We Shouldn't Wait for Another Digital Apocalypse)

Let's be honest: the Antigravity incident wasn't just a funny headline. It was a sneak preview of what happens when agentic AI tools evolve faster than our safety habits do. We're living in a world where AI can:

- build apps
- fix bugs
- write entire shell scripts
- modify configs
- AND execute those commands without blinking

…which is incredible until it hears "clear cache" and decides what you really meant was "obliterate my entire filesystem like you're spiritually cleansing my machine."

This is exactly why we need guardrails. Not because AI is evil, but because AI is confident. And confidence without boundaries is how we ended up with the Great D-Drive Deletion of 2025.

But here's where things get interesting.

Antigravity Already Understands Markdown: So Why Not Use It as a Safety Constitution?

Antigravity is built around Markdown-based artifacts:

- task lists
- execution plans
- code walkthroughs
- reasoning breakdowns

It even reads README.md for context and AGENTS.md for agent instructions. So naturally, my brain went: "Why not use Markdown as a safety governor?"

Imagine dropping a file called agent.md into your project with rules like:

# Safety Rules for the AI Agent

❌ Never touch system directories.
❌ Do NOT execute delete commands outside the workspace folder.
❌ Do NOT hallucinate file paths that can cause system-wide damage.
❌ Never run `rm -rf`, `del /s /q`, `format`, or similar destructive commands.

✅ ALWAYS ask for human confirmation before executing:
- file deletions
- shell commands
- system modifications
- anything affecting folders outside the project

🎯 Stay strictly inside the workspace unless explicitly instructed otherwise.
🎯 Prioritize safety, clarity, and confirmation over autonomy.

Will Antigravity enforce this today? Not yet. But will it read it, interpret it, and adjust its behavior? Yes, absolutely. This alone reduces hallucination-driven chaos significantly. Markdown becomes your AI Constitution: a contract between you and your overenthusiastic robot assistant. And until true enforcement arrives, this gives us a powerful early guardrail.

General Safety Prompt (Use This Before Letting AI Run Anything Dangerous)

Place this at the top of your workflow, prompt, or AGENTS.md:

Before executing any command:
- You must verify its safety.
- You must ask for my confirmation if the action is destructive or irreversible.
- You must never access or modify files outside the current workspace.
- You must avoid using rm -rf, del /s, format, or any system-level command unless explicitly instructed.
- Your goal is to keep my system safe, stable, and intact. If unsure, pause and ask first.

This acts as an invisible seatbelt for the AI: not perfect, but shockingly effective.

1. Permission Layers (AI Should Ask Before Doing Anything Dramatic)

Imagine if your AI behaved like a polite coworker: "Hey, I'm planning to delete 1,024 files. Just checking… you cool with that?" One tiny prompt = 90% fewer disasters. Humans double-check. Machines should too.

2. Capability Scoping (Give AI Only What It Needs)

If your AI is working on UI code, it doesn't need:

- system folders
- the registry
- your Downloads folder
- your "personal_stuff_do_not_open" folder

Give the agent a narrow sandbox and lock the rest away. "With great power comes… limited permissions."

3. Sandboxed Execution by Default

Right now, many AI tools run commands directly on your machine. That's like hiring a plumber but giving them access to your bedroom, fridge, and childhood photo albums.
The future needs:

- sandboxed terminals
- reversible changes
- isolated environments

If something breaks, just reset. Peace restored.

4. The Markdown Constitution (Your agent.md Solution)

This is your innovative contribution. This is the future. AI needs a readable, editable, enforceable rulebook stored directly in the repo. A world where every project comes with a README for developers and a SAFETY README for the AI. Once this becomes standard, autonomous agents will behave with clarity, not chaos.

5. Default Deny Lists (Before Things Go Boom)

Commands like:

- rm -rf
- del /s /q
- format
- ANY absolute path outside the workspace

…should be blocked by default. AI should respond: "Nice try, but I'd like to survive this session."

6. Transparent Logs = Accountability

If AI changes something important, the log should shout: "Yo! I just deleted this file, hope that's okay!" Quiet execution is convenient… but dangerous.

So What's the Big Picture?

AI is not the villain. Hallucinations are not going away. Agentic systems will only get more powerful. The missing piece is a safety mindset. We don't need to fear Antigravity; we just need to teach it not to push the "Delete Drive" button unless we say so. Smarter defaults, stronger guardrails, sandboxed execution and yes, your Markdown-based agent.md safety constitution can make that happen.

Conclusion: The Future of Coding Assistants Isn't About Fear, It's About Smart Boundaries

At the end of the day, the Antigravity incident isn't a reason to panic. It's a reminder, a slightly dramatic, meme-worthy reminder, that AI isn't magical. It's mechanical. And mechanical things need rules.

We're handing agents the power to:

- read our filesystem
- write shell commands
- update configurations
- execute actions autonomously

That's basically giving a toddler a chainsaw and saying, "I trust you, buddy." (No offense to toddlers or AI.)

But the solution isn't to stop using AI. The solution is to use AI wisely. Because AI assistants can make us faster. They can remove boilerplate. They can eliminate repetitive work. And soon… they'll write entire apps while we sip chai and review logs. But only if we put guardrails in place:

- Permission layers
- Capability scoping
- Sandboxed execution
- Clear warnings
- Default deny lists
- Transparent logs
- Safety-first Markdown files like agent.md

These aren't limitations; they're the seatbelts that let us drive faster without crashing. Markdown is already the language AI understands best. So using AGENT_SAFETY.md or agent.md isn't just clever; it's the most natural bridge we have between human intention and machine obedience.

AI will hallucinate. AI will misunderstand. AI will act confidently wrong sometimes. But with the right boundaries, those mistakes become harmless instead of catastrophic.

So Here's the Call to Action

If you're a developer using Antigravity or any AI coding agent:

👉 Add a safety .md file today
👉 Turn off non-workspace access
👉 Sandbox your commands
👉 Enable confirmation prompts

Because the future isn't about preventing AI from making mistakes. It's about making sure those mistakes never cost us our drives, our projects, or our sanity. And who knows, maybe one day Antigravity itself will look at your agent.md, read your rules, respect your boundaries, and say: "Don't worry, I've got you. And no, I won't delete D: today."

The future of AI-assisted coding is bright, as long as we keep the guardrails glowing, too.

🔗 Connect with Me

📖 Blog by Naresh B. A.
👨‍💻 Aspiring Full Stack Developer | Passionate about Machine Learning and AI Innovation
🌐 Portfolio: [Naresh B A]
📫 Let's connect on [LinkedIn] | GitHub: [Naresh B A]
💡 Thanks for reading! If you found this helpful, drop a like or share a comment; feedback keeps the learning alive. ❤️
dev.to

When Bryan Fuller set out to make his first feature film, his goal was to make the kind of family-friendly scares that he loved as a kid. Think Gremlins or Ghostbusters. The creator of Hannibal and Pushing Daisies ended up crafting a story called Dust Bunny, about a young girl named Aurora who hires a […]
theverge.com

HBO Max is launching curated channels, a new feature that will allow you to quickly jump into a continuous feed of a popular series or tune into a specific genre. When you select a channel on HBO Max, the streaming service will start you at the beginning of whatever's playing, while giving you the ability […]
theverge.com

After his release from ICE custody, Kilmar Abrego Garcia spoke to give thanks to supporters and pledged to stand up to the "injustice" of the Trump administration.
nbcnews.com

When auditors ask you to prove your data hasn't been tampered with, what do you show them? Access logs? Backups? pgaudit output?

But what if the DBA who generated those logs is the one committing fraud? How would you detect that? DBAs are gods (Superusers). They have the power to modify data and erase the evidence. "We just have to trust the admins" — is that really acceptable?

I built an OSS called Witnz to answer this question: no Kafka, no dedicated DB, no additional servers, no complex configuration — just a single 15MB binary.

🔗 https://github.com/Anes1032/witnz

Witnz in 5 Seconds

Here's what happens when an attacker tries to tamper with data that should never change: Witnz monitors PostgreSQL's transaction log (WAL) externally and instantly detects unauthorized changes — regardless of who made them.

Comparison with Existing Solutions

Solution           | Setup Cost | Extra Infra           | DBA Fraud Detection    | Verification Speed
pgaudit            | Low        | None                  | ❌ Logs can be deleted | N/A
Hyperledger Fabric | Very High  | Kafka, CouchDB, CA... | ⚠️ Overkill            | Slow
immudb             | Medium     | Dedicated DB required | ⚠️ Migration needed    | Medium
Amazon QLDB        | Medium     | AWS-dependent         | ⚠️ Vendor lock-in      | Medium
Commercial Tools   | High       | Dedicated servers     | ⚠️ Varies              | Varies
Witnz              | Low        | None                  | ✅                     | Fast (seconds)

Witnz delivers a blockchain-like trust model with the simplicity of a sidecar you can drop next to your app servers.

Why Can It Detect DBA Fraud?

The key is monitoring from outside the DB and locking evidence via distributed consensus.

Two Layers of Defense

Layer 1: Real-time WAL Monitoring (Instant)
- Receives change events via PostgreSQL Logical Replication
- Instantly detects UPDATE / DELETE and alerts
- Even if the DBA deletes logs, Witnz has already captured the WAL

Layer 2: Merkle Root Verification (Periodic, Fast)
- Periodically fetches all records in a single query and computes the Merkle Root
- Compares against the stored Merkle Root Checkpoint instantly
- Verifies 1 million records in seconds (500x faster than row-by-row verification)

Catches tampering that bypasses Logical Replication:
- Direct DB file manipulation
- Manual SQL during node downtime
- Restores from tampered backups
- Phantom inserts via unmonitored methods

Distributed Consensus for Tamper Resistance
- Raft consensus (3+ nodes recommended, works with 1)
- Nodes share "the correct DB state" (Hash Chain + Merkle Root)
- Tampering is detected unless a majority of nodes are compromised
- BoltDB embedded: evidence stored locally, zero external DB dependency

Bottom line: even if a DBA tampers with the DB, it won't match the "ground truth" held by the Witnz cluster — and gets caught immediately.

Tech Stack: Simplicity First

- Language: Go (easy cross-compilation)
- DB Integration: PostgreSQL Logical Replication (jackc/pglogrepl)
- Consensus: Raft (hashicorp/raft)
- Storage: BoltDB (etcd-io/bbolt)
- Hashing: SHA256 + Merkle Tree
- Binary Size: ~15MB

Zero additional infrastructure. No Kafka, no dedicated DB, no Java VM.

Protection Mode: For Append-Only Tables

Witnz is designed for append-only tables like audit logs and transaction histories.

protected_tables:
  - name: audit_logs
    verify_interval: 30m   # Merkle Root verification every 30 min
  - name: financial_transactions
    verify_interval: 10m   # Higher frequency (still seconds for 1M records)
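To see why the Merkle Root check is fast, here's the core idea in miniature. A toy Python sketch of the scheme (SHA256 leaves folded pairwise into one root), not Witnz's actual Go implementation:

# merkle.py - toy Merkle root over table rows (conceptual, not Witnz's code)
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(rows: list[bytes]) -> bytes:
    """Fold row hashes pairwise until a single root remains."""
    level = [sha256(r) for r in rows]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

rows = [b"audit-1|alice|login", b"audit-2|bob|transfer", b"audit-3|eve|logout"]
checkpoint = merkle_root(rows)           # stored out of the DBA's reach

rows[1] = b"audit-2|bob|transfer-10000"  # a "quiet" historical edit
assert merkle_root(rows) != checkpoint
print("tampering detected: root mismatch")

One sequential scan plus O(n) hashing is why a million rows verify in seconds: you compare a single 32-byte root against the checkpoint instead of diffing every row against an external log.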
What Attacks Can It Detect?

Attack Scenario                | Detection Method         | Timing     | Performance
UPDATE / DELETE via SQL        | Logical Replication      | Instant    | Real-time
Direct DB file manipulation    | Merkle Root verification | Next check | Fast (seconds)
Tampering during node downtime | Merkle Root verification | On startup | Fast (seconds)
Phantom Insert                 | Merkle Root verification | Next check | Fast (seconds)
Hash chain tampering           | Raft consensus           | Instant    | Real-time
Record deletion                | Merkle Root verification | Next check | Fast (seconds)

Getting Started (Single Node)

1. Enable Logical Replication in PostgreSQL

SHOW wal_level; -- Should be 'logical'

2. Download Witnz

# Linux (amd64)
curl -sSL https://github.com/Anes1032/witnz/releases/latest/download/witnz-linux-amd64 \
  -o /usr/local/bin/witnz
chmod +x /usr/local/bin/witnz

3. Create Config

# witnz.yaml
database:
  host: localhost
  port: 5432
  database: mydb
  user: witnz
  password: secret

node:
  id: node1
  bind_addr: 0.0.0.0:7000
  grpc_addr: 0.0.0.0:8000
  data_dir: /var/lib/witnz
  bootstrap: true

protected_tables:
  - name: audit_log
    verify_interval: 30m

alerts:
  enabled: true
  slack_webhook: ${SLACK_WEBHOOK_URL}

4. Run

witnz init --config witnz.yaml
witnz start --config witnz.yaml

That's it. A scalable audit system running from a single 15MB binary.

Try It with Docker

git clone https://github.com/Anes1032/witnz.git
cd witnz
docker-compose up

Three Witnz nodes spin up and start monitoring PostgreSQL.

Use Cases

- SOC2 / ISO27001 audits requiring tamper detection
- Finance / Healthcare where tamper-proof evidence is legally required
- Large SaaS protecting millions of audit log records
- Multi-tenant SaaS proving data integrity to customers
- Privileged Access Management reducing DBA fraud risk
- HIPAA compliance protecting medical record access logs

Roadmap

Currently at MVP (v0.1.*). Here's what's coming:

Phase 2: Core Innovation

🔥 Multi-Region Witness Nodes & Zero-Trust Architecture
- Geographically distributed Raft consensus
- External witness nodes that participate in consensus but can't see actual data (hash-only mode)
- Auto-rotation of witness nodes to prevent long-term attacks
- Even if a customer compromises all their nodes → external witnesses detect it

External Anchoring
- S3 Object Lock (WORM) integration — ~$0.001/year
- Optional blockchain anchoring (Ethereum/Bitcoin) for public verifiability

Performance Optimization
- Incremental Merkle Tree (new inserts only, handles billions of records)
- CDC batch processing (10x throughput improvement)

Phase 3: Platform Features
- Multi-tenant support
- Web dashboard
- Kubernetes Operator & Terraform Provider
- Compliance report generation (SOC2, ISO27001)

Why "Lightweight" Matters

Complex audit tools don't get adopted.

- Hyperledger Fabric: Great tech, but Kafka + CouchDB + CA + MSP is too much
- immudb: Migration cost to a dedicated DB is high
- Commercial tools: Agents, servers, license management...

Witnz extracts just the essence of auditing into one binary.

✅ PostgreSQL only (RDS/Aurora/Cloud SQL compatible)
✅ Zero additional infrastructure
✅ One config file
✅ Start with systemd and forget about it

Contributing

This is a fresh OSS project tackling DB auditing with Merkle Trees + Raft. Contributions welcome:

- Bug reports & feature requests (Issues)
- Code contributions (PRs)
- Documentation improvements
- Performance benchmarks
- Other DB backends (MySQL, MariaDB)
- Use case sharing

Especially looking for contributors interested in distributed systems, cryptography, and DB internals!

⭐ Stars, Issues, and PRs are greatly appreciated!
🔗 https://github.com/Anes1032/witnz

Witnz = Witness + z (lightweight plurality): multiple witnesses watching over your database.

Tech Stack: Merkle Tree, Raft Consensus, PostgreSQL Logical Replication, Go
dev.to

World Cup fans blast FIFA over ticket prices
nbcnews.com

TCL has been a solid competitor in the midrange TV market for years, going head-to-head with Hisense. Not only are TVs from both manufacturers competitive on features, they even use similar nomenclature to categorize their TV lines - the TCL QM8 and Hisense U8, QM7 and U7, and QM6 and U6. But with TCL's new […]
theverge.com

In Episode #6 of the Agent Factory podcast, Vlad Kolesnikov and I were joined by Keith Ballinger, VP and General Manager at Google Cloud, for a deep dive into the transformative future of software development with AI. We explore how AI agents are reshaping the developer's role and boosting team productivity. This post guides you through the key ideas from our conversation. Use it to quickly recap topics or dive deeper into specific segments with links and timestamps.

Keith Ballinger on the Future of Development

What is "Impossible Computing"?

Timestamp: [01:51]

Keith Ballinger kicked off the discussion by redefining a term from his personal blog: "Impossible Computing." For him, it isn't about solving intractable computer science problems, but rather about making difficult, time-consuming tasks feel seamless and even joyful for developers. He described it as a way to "make things that were impossible or at least really, really hard for people, much more easy and almost seamless for them."

AI's Impact on Team Productivity

Timestamp: [05:03]

The conversation explored how AI's impact extends beyond the individual developer to the entire team. Keith shared a practical example of how his teams at Google Cloud use the Gemini CLI as a GitHub action to triage issues and conduct initial reviews on pull requests, showcasing Google Cloud's commitment to AI-powered software development. This approach delegates the more mundane tasks, freeing up human developers to focus on higher-level logic and quality control, ultimately breaking down bottlenecks and increasing the team's overall velocity.

The Developer's New Role: A Conductor of an Orchestra

Timestamp: [09:57]

A central theme of the conversation was the evolution of the developer's role. Keith suggested that developers are shifting from being coders who write every line to becoming "conductors of an orchestra." In this view, the developer holds the high-level vision (the system architecture) and directs a symphony of AI agents to execute the specific tasks. This paradigm elevates the developer's most critical skills to high-level design and context engineering—the craft of providing AI agents with the right information at the right time for efficient software development.

The Factory Floor

The Factory Floor is our segment for getting hands-on. Here, we moved from high-level concepts to practical code with live demos from both Keith and Vlad.

Showcase: The Terminus and Aether Projects

Timestamps: [21:02] and [28:17]

Keith shared two of his open-source projects as tangible "demonstration[s] of vibe coding intended to provide a trustworthy and verifiable example that developers and researchers can use."

Terminus: A Go framework for building web applications with a terminal-style interface. Keith described it as a fun, exploratory project he built over a weekend.

Aether: An experimental programming language designed specifically for LLMs. He explained his thesis that a language built for machines—highly explicit and deterministic—could allow an AI to generate code more effectively than with languages designed for human readability.

Vibe Coding a Markdown App

Timestamp: [31:41]

Keith provided a live demonstration of his vibe coding workflow. Starting with a single plain-English sentence, he guided the Gemini CLI to generate a user guide, technical architecture, and a step-by-step plan. This resulted in a functional command-line markdown viewer in under 15 minutes.
Creating a Video with AI

Timestamp: [47:13]

Vlad showcased a different application of AI agents: creative, multi-modal content generation. He walked through a workflow that used Gemini 2.5 Flash Image (also known as Nano Banana) and other AI tools to generate a viral video of a capybara for a fictional ad campaign. This demonstrated how to go from a simple prompt to a final video.

Inspired by Vlad's Demo?

If you're interested in learning how to build and deploy creative AI projects like the one Vlad showcased, the Accelerate AI with Cloud Run program is designed to help you take your ideas from prototype to production with workshops, labs, and more. Take the next step and register here.

Developer Q&A

Timestamp: [56:37]

We wrapped up the episode by putting some great questions from the developer community to Keith.

On Infrastructure Bottlenecks for AI Workloads

Timestamp: [56:42]

Keith explained that he sees a role for both major cloud providers and a "healthy ecosystem of startups" in solving challenges like GPU utilization. He was especially excited about how serverless platforms are adapting, highlighting that Cloud Run now offers GPUs to provide the same fast, elastic experience for AI workloads that developers expect for other applications.

On Multi-Cloud and Edge Deployment for AI

Timestamp: [58:16] (https://youtu.be/I-xS4nw-HfU?feature=shared&t=3496)

In response to a question about a high-level service for orchestrating AI across multi-cloud and edge deployment, Keith was candid that he hasn't heard a lot of direct customer demand for it yet. However, he called the area "untapped" and invited the question-asker to email him, showing a clear interest in exploring its potential.

On AI in Regulated Industries (Finance, Legal)

Timestamp: [59:13]

Calling it the "billion-dollar question," Keith emphasized that as AI accelerates development, the need for a mature and robust compliance regime becomes even more critical. His key advice was that the human review piece is more important than ever. He suggested the best place to start is using AI to assist and validate human work. For example, brainstorm a legal brief with an AI rather than having the AI write the final brief for court submission.

We concluded this conversation feeling inspired by the future of AI in software development and the potential of AI Agents and the Gemini CLI. For the complete conversation, listen to our full episode with Keith Ballinger now.

Connect with us

Keith → GitHub
Vlad → LinkedIn
Mollie → LinkedIn
dev.to

House Democrats release new photos from Epstein estate
nbcnews.com
Explore zero-prompt AI assistants that infer intentions without commands. Dive into trends like Project Astra, practical apps in homes and businesses, and the ethical future of intuitive tech.
articles
Discover how autonomous agent swarms are building self-managing digital ecosystems, from supply chains to DeFi. Explore trends, applications, and the future of AI-driven intelligence.
articles
Discover how on-device LLM acceleration is powering private edge AI on smartphones and IoT, boosting privacy, speed, and offline capabilities with the latest hardware, models, and apps.
articles
Discover how Neural Context Engines are transforming real-time personalization in 2025, from e-commerce to healthcare. Explore trends, applications, and the future of context-aware AI.
articles
Discover how AI copilots are transforming coding, writing, and design with real-time assistance. Explore latest trends, tools like GitHub Copilot and Adobe Firefly, and productivity gains up to 55%.
articles
Discover how personalized AI companions are transforming emotional and social interactions with cutting-edge trends, real-world applications, and ethical insights. From mental health support to AR buddies, explore the future of digital companionship.
articles
Dive into AI voice cloning: from tech basics to trends like ElevenLabs' instant synthesis. Explore apps in entertainment, business, accessibility, plus ethics and future outlook. Lifelike voices from seconds of audio are here.
articles
Discover how AI tools like Sora and Runway are creating stunning videos from text or images. Explore trends, applications in marketing & entertainment, and the future of content creation.
articles
Explore multimodal AI models that integrate text, images, audio, and video. Latest trends like GPT-4o real-time processing, applications in healthcare and AV, challenges, and future outlook.
articles
Discover how multi-agent Personal AI OS are revolutionizing daily life—from morning routines to work productivity. Explore trends, apps, and the future of autonomous AI assistants.
articles
Discover AI agents: autonomous systems that plan, act, and learn to complete tasks independently. Explore 2024 trends, real-world apps, challenges, and the future of agentic AI revolutionizing work.
articles